Joscha Bach Λ Karl Friston: AI, Death, Self, God, Consciousness
December 12, 2023
2:45:14
Transcript
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region.
I'm particularly liking their new Insider feature, which just launched this month. It gives me front-row access to The Economist's internal editorial debates, where senior editors argue through the news with world leaders and policy makers, plus twice-weekly long-format shows, basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines.
As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
There could be multiple consciousnesses. Of course, one will not be aware of the other and possibly not even able to infer the agency even if it was. We do not become conscious after the PhD. We become conscious before we can drag a finger. So I suspect that consciousness allows the self-organization of information processing systems in nature.
Joscha Bach and Karl Friston, today's Theolocution guests, are known for their work in artificial intelligence, neuroscience, and philosophical inquiry. Bach, an AI researcher, delves into cognitive architectures and computational models of consciousness and psychology. Friston, a neuroscientist, is lauded for his development of the Free Energy Principle, a theory explaining how biological systems maintain order.
This framework of neural processes is rooted in thermodynamics and statistical mechanics. Joscha has been on this podcast several times: once solo, another time with Ben Goertzel, another with John Vervaeke, another with Michael Levin, and one more with Donald Hoffman. Karl Friston has also been on several times.
The first hour of today's talk is broadly on agreements so that we can establish some terms. The second hour roughly is on points of divergence, and the third is on more dark philosophical implications, as well as how to avoid existential turmoil precipitated by earnestly contending with these heavy ideas.
For those of you who don't know, my name is Kurt Jaimungal, and there's this podcast here called Theories of Everything, where we investigate theories of everything primarily from a physics perspective, as my background's in math and physics, but I'm also interested in the larger questions on reality, such as: what is consciousness? What constitutes it?
What gives rise to consciousness? What counts as an explanation? You could think of this channel as exploring those questions with you while I sit and ponder, night and day, incessantly. Enjoy this Theolocution with Joscha Bach and Karl Friston. All right. Thank you all for coming on to the Theories of Everything podcast. What's new since the last time we spoke? Joscha, the last time was with Ben Goertzel, and Karl, the last time was at the Active Inference Institute. So, Joscha, please.
There's so much happening in artificial intelligence. We have more on a weekend than a normal TV show has in seven seasons. So it's hard to say what's new. For me personally, I've joined a small company that is exploring an alternative to the perceptron. I think that the way in which current neural networks work is very unlike our brain. But I don't think that we have to imitate the brain. We have to figure out what kind of mathematics the brain is approximating.
Very similar actually. I guess what's new in the larger scheme of things of course is the advent of large language models and all the machinations that surround that and the focus that that has caused in terms of what do we require of intelligent systems, what do we require
of artificial intelligence, what's the next move to generalized artificial intelligence and the like. So that's been certainly a focus of discussions both in academia and in industry in terms of positioning ourselves for the next move and the implications it has both in terms of understanding the mechanics of belief updating and the move from the age of information to the age of intelligence but
Right. I read your paper, Mortal Computation: A Foundation for Biomimetic Intelligence.
Well, we can get right into that, Karl. On page 15, you define what a mortal computation is as it relates to Markovian blankets. Can you please recount that? And further, you quote Kierkegaard, who says that life can only be understood backwards but must be lived forwards. So how is that connected to this? Right.
You are a voracious reader. That was only a few days ago. I do my research, man. And also, I did not write that. That was my friend, Alexander. You can take the credit. We'll remove this part. I can't take the credit because I don't know about any of the philosophy, but I thought it was largely
his ideas, but they certainly resonate again with this sort of notion of a commitment to a biomimetic understanding of intelligence, and that particular paper sort of revisits the notion of mortal computation in terms of what it means to be a mortal computer and
The importance of the physical instantiation, the substrate on which the processing is implemented as being part of the computation in and of itself.
You know, that speaks closely to all sorts of issues, you know, the potential excitement about neuromorphic computing, if you're a computer scientist, the importance of in-memory processing. So, technically, you're trying to evade the von Neumann bottleneck or the memory wall. And I introduce that because that speaks to
From an academic point of view, the importance of efficiency in terms of what is good belief updating, what is good, what is intelligent processing, but from a more societal point of view, the enormous
drain on our resources incurred by data farms, by things like large language models, in eating up energy and time and money in a very non-biomimetic way. So I think mortal computation as a notion probably has got a lot to say about debates in terms of direction of travel, certainly in artificial intelligence research.
But you'll have to unpack the philosophical reference for me. So, Joscha, you also had a paper called A Path to Generative Artificial Selves with your co-author, Liane Gabora. I don't even know if I'm pronouncing that correctly. Toward the end of the paper, you had some criteria about selfhood, something called maxRAF, which has RAFs as a subset, and there were about six or seven criteria. Can you outline what you were trying to achieve with that? What a RAF is?
and what does personal style have to do with any of this? Liane likes to express her ideas in the context of autocatalytic networks. But if we talk to a general audience, I think rather than trying to unpack this particular strain of ideas and translate it into the way in which we normally think about these topics, I think it's easier to start directly from the bottom, from the way in which information processing systems in nature differ from those that we are currently building.
in our GPUs, because the stuff that we put in our GPUs is designed from the outside in. We basically have a substrate with well-defined properties. We design the substrate in such a way that it's fully deterministic, so it does exactly what we want it to do. And then we impose a function on it that is computing exactly what we want it to compute. And so we design from scratch
what that system should be doing, but it's only working because the system is at some lower level already implementing all the necessary conditions for computation.
And we are implementing a function approximator on it that does function approximation to the best of our own understanding. There's a global function that is executed on this neural network architecture. And in biology this doesn't work that way, nor in social systems. These are all systems that you could say are built from the inside out. So basically there are local agents, cells, that have to impose structure on the environment.
And at some point, they discover each other and start to collaborate with each other and replicate the shared structure. But before this happens, there's only chaos around them, which they turn gradually into complexity. And so the intelligence that we find in nature is something that is growing from the inside out into a chaotic world, into an unknown world. And this
is a very different principle that leads to different architectures. So when we think about an architecture that is going from the inside out, it needs to be colonizing in a way, and it needs to impose an administration on its environment that basically yields more resources, more energy, than the maintenance of this administration costs. And it also needs to be able to defend itself against competing administrations that would want to do the same thing. So you are the set of principles
that outcompetes all the other principles that could occupy your volume of space. And the systems that do this basically need to have a very efficient organization, which at some point requires that they model themselves, that they become to some degree self-aware. And I think that's why from a certain degree of complexity, the forms of organization, if you find both in minds and in societies, need to have self-models. They need to have models about what they are and how they relate to the world.
And this is what I call sentience in the narrow sense. It's not the same thing as consciousness. Consciousness is this real time perceptual awareness of the fact that we are perceiving things that creates our perceptual individual subjective now. But sentience is something that I think can also be attained by say a large corporation that is able to model its own status, its own existence in a legal and practical and procedural way.
and that is training its constituents, the people who enact that agent, in following all the procedures that are necessary for keeping that sentient larger system that is composed of them alive. And so when we try to identify principles that could be translated into nervous systems, or into organisms consisting of individual self-interested cells, we see some similarities. We basically can talk about how self-stabilizing agents emerge in self-organizing systems.
So Karl, I know quite a slew was said. If you don't mind saying, what about what Joscha had spoken about coheres with your model, your research, or what contravenes it? No, I was just marveling how, how consilient it is, using a lot of my favorite words.
I had never heard the inside-out framing before. This is actively sampling and actively generating hypotheses for sensations, and crucially you are in charge of the sensory data that you are making sense of, which speaks exactly, I think, to what Joscha was saying in terms of
and designing and orchestrating and creating an ecosystem in that sort of inside out way. That sounds absolutely consistent with certainly the perspective on self-organization to non-equilibrium steady state. So talking about sort of stable, sustainable kinds of self-organization, again, that you see in the real world and quintessentially bio-mimetic.
If you wanted, I think, to articulate what we've just heard from the point of view of a physicist who's studying non-equilibrium steady states, that's exactly the kind of thing that you get. Even to the notion of the increasing complexity of a structural sort that requires this sort of consistent harmonious
Another key point that was brought to the table was this notion of how essentially it has to have a self-model, echoing the early cybernetics movement and notions of the good regulator theorem from Ross Ashby. But I think Joscha has taken that one step further than Ashby and his colleagues, in the sense that it is a model of self. And I think that's
With TD Early Pay, you get your paycheck up to two business days early.
Which means you can go to tonight's game on a whim. Check out a pop-up art show. Or even try those limited edition doughnuts. Because why not? TD Early Pay. Get your paycheck automatically deposited up to two business days early for free. That's how TD makes payday unexpectedly human.
Almost invariably, when I speak to both of you, the concept of self comes up. I think we could do a Ctrl-F in the transcript and we'll see that it's orders of magnitude larger than the average number of times that word is mentioned. And I'm curious as to why. Well, in part, that's because of the channel, the nature of this channel. But is there something about the self that you all are trying to solve? Are you trying to understand what is the self? Are you trying to understand yourselves? Karl or Joscha, if you want to tackle that.
Well, the problem of naturalizing the mind is arguably the most important remaining project of human philosophy. And it's risky and it's fascinating. And I think it was at the core of the movement when artificial intelligence was started. It's basically the same idea that Leibniz and Frege and Wittgenstein pursued, this idea of mathematizing the mind.
And the modern version of mathematics is constructive mathematics, which is also known as computation. And this allows us to make models of minds that we can actually test by re-implementing them. It also allows us to at some point connect philosophy and mathematics, which means that we will be able to say things in a language
that is both so tight that it can be true and we can determine the truth of statements in a formal way. And the other side so deep and rich that we can talk about the actual reality that we experience and observe. And to close this gap between philosophy and mathematics, we need to automate the mind because our human minds are too small for this. But we need to identify the principles that are approximated in the workings of biological cells that model reality.
and then scale them up in the substrate that can scale up better than the biological computations in our own skulls and bodies. And this is one of the most interesting questions that exists. I believe it is the most interesting and most important question that exists. The understanding of our personal self and how this relates to our mind and how our mind is implemented in the world is an important part of this.
And while it's personally super fascinating, I guess also for many of the followers of your channel, which is quite programmatic in its name and direction, this is to me almost incidental. On the other hand, I notice an absence of seriousness in a lot of neuroscientists and AI researchers
who do not actually realize in their own work that when they think about the mind and mental processes and mental representations and so on, that they actually think about their own existential condition and have to explain this and integrate this. So we have to account for who we are in this way. And if you actually care about who we are, we have to find models that allow us to talk about this in an extremely strict, formal and rational way. And our own culture has, I think, a big gap in its metaphysics and
which happened after we basically transcended the Christian society. We kicked out a lot of terms that existed in the Christian society to talk about mind, consciousness, intentionality and so on, because they seem to be superstitious, overloaded with religious mythology and not tenable. And so in this post-enlightenment world, we don't have the right way to think about what consciousness and self and so on is. And part of the project of understanding the mind is
to rebuild these foundations, not in any kind of mythological superstitious way, but by building on our first principles thinking that we discovered in the last 200 years and then gradually build a terminology and language that allows us to talk again about consciousness and mind and how we exist in the world. So for me, it's a very technical notion, the self. It's just the model of an agent's interest in the universe that is maintained
by a system that also maintains a model of this universe. So my own self is basically a puppet that my mind maintains about what it would be like if there was a person that cared.
And I perceive myself as the main character of that story. But I also noticed that there is intelligence outside of this, coexisting with me in my own brain, that is generating my emotions and generating my world model, my perception and so on. It's basically keeping the score, and all the pain and pleasure that I experience is generated by intelligent parts of my mind outside of my personal self.
Well, Karl, what's left to be said?
Well, he's just said it. I can see there's a pattern here. I'll say what he just said in different words if I can. So yeah, I love this notion of using the word naturalization. I think naturalizing things in terms of mathematics and possibly physics is exactly the right way to go and it does remind me of
My friend Chris Fields' notion that our job is basically to remove any bright lines between physics, biology, psychology and now philosophy and I think mathematics is the right way to do that or at least
the formalism, the calculus that you get from mathematics, or possibly category theory or whatever, that can be instantiated in silico or ideally in any mortal computation. So I think that's a really important point, and it does speak, I think, to a broader agenda which was implicit in Joscha's review, which is the ability to share
To share a common ground, to share a generative model of us in a lived world, where that lived world contains other things like us. So the one requisite, I think, for just existing in the shared world is actually having a shared model, and then that brings all sorts of interesting questions to the table about: is my model of me the same kind of model that I'm using
of you, to explain you, to ascribe to you intentionality and all those really important states of being or at least hypotheses from the point of view of predictive processing accounts, hypotheses that I am in this mental state and you are in that mental state. So I think that was a really important thing to say that we need to naturalize our understanding of the way that we work in our worlds.
In relation to the importance of self, again I'm just thinking from the point of view of a physicist that you cannot get away from the self if you just start at the very beginning of information theoretic treatments, self-information for example. That's where it all starts for me certainly regarding variational free energy as a variational bound on self-information.
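The bound Friston mentions here can be written out explicitly. The following is the standard identity from the variational inference literature, supplied as an illustration rather than quoted from the conversation:

```latex
% Self-information (surprisal) of an observation o under a generative model p:
%   -\ln p(o)
% Variational free energy for an approximate posterior q(s) over hidden states s:
F[q, o] = \mathbb{E}_{q(s)}\!\left[ \ln q(s) - \ln p(o, s) \right]
        = -\ln p(o) + D_{\mathrm{KL}}\!\left[ q(s) \,\middle\|\, p(s \mid o) \right]
        \geq -\ln p(o)
% The inequality holds because the KL divergence is non-negative, so free energy
% is an upper bound on self-information, with equality when q(s) = p(s | o).
```

Minimizing free energy therefore keeps the system's observations unsurprising, which is the sense in which self-information is "where it all starts."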
And then you talk about self-organization, talking all the way through to the notion of self-evidencing, as Jakob Hohwy would put it. At every point you are writing down or naturalizing the notion of self at many, many different levels. And indeed, if one generalizes that,
You're almost specifying the central importance of thingness in the sense that I am a thing and by virtue of saying I
I am implying, inducing a certain self-aspect to me as a thing, and again that's the starting point for certainly the free energy principle's approach to this kind of self-organization. I repeat, I think Joscha is taking us one step further, though, in terms of: we can have ecosystems of things, but when those things now start to have
to play the game of modeling whether you caused that or whether I caused that, that now brings to the table
an important model of our world that there is a distinction between me and you. And as soon as you have this fundamental distinction, which of course would be something that a newborn baby would have to spend hours, possibly months building and realizing that mum is separate from the child herself.
So I think that's terribly important. One final thing, just to speak again to the importance of articulating your self-organization in terms of things like intentions and beliefs and stances
I think that's also quite crucial and what it means if you want to naturalize it mathematically you have to have a calculus of beliefs. So you're talking basically a formulation either in terms of information theory or probability theory where you're now reading the probabilistic description of this universe and the way that we are part of that universe
in terms of beliefs and starting to think about all of physics in terms of some kind of belief updating.
Ever find yourself questioning reality and then you snap back to it, remembering that, hey, you have nothing planned for dinner? That's where HelloFresh comes in. They deliver pre-portioned, farm-fresh ingredients and sensational seasonal recipes right to my door. They have over 40 choices every week, which keeps me and my wife exploring new flavors. I did the pronto option.
They've also got this quick and easy option that makes 15 minute meals. There's also options if you're vegetarian, if you only eat fish. Something that I love is that their deliveries show up right on time, which isn't something that I can say about other food delivery services. This punctuality is a huge deal for both myself and my wife. Plus, we love using HelloFresh as a way to bond. We cook together. It's super fun when it's all properly portioned out for you already.
So are you still on the fence? Well, it's cheaper than grocery shopping and 25% less expensive than takeout. The cherry on top? Head to hellofresh.com slash theories of everything free and use the code theories of everything free all as one word, all as caps, no spaces for a free breakfast for life. That's free breakfast people. Don't forget HelloFresh is America's number one meal kit. Links in the description.
Karl, you use the word shared model. Now, is that the same as shared narrative? Yes, common ground. I mean, I use it literally in the sense of a generative model
somebody in generative AI would understand the notion. So if we talk about a self-model as a special kind of generative model that actually entertains the hypothesis that I am the cause of my sensations, and, you know, Joscha took us through the myriad of sensations that I need to explain, then we're talking about self-models as part of my generative model,
where that includes this notion that I am the agent that is actually gathering the data that the generative model is modeling. So the generative model is just a simple specification. Again, from the physics perspective, it's actually just a probabilistic description of the characteristic states of something, namely me,
that can be then used to describe the kind of belief updating that this model would have to evince in order to exist when embedded in a particular universe. Other readings of a generative model would be exactly the common ground that we all share. Part of my generative model would be the way that I broadcast my inference, my belief updating using language, for example.
That requires a shared generative model about the semiotics and the kind of way that I would articulate or broadcast my beliefs. That generative model is a model of dynamics. It's a model not just of the state of the world, but the way that the transition dynamics, the trajectories, the paths. I'm using your word narrative just as a euphemism.
for a certain kind of path through some model state space. So if you and I share the same narratives in the sense that we are both following the same conversation and the same mutual understanding, we are sharing our beliefs through communication, then that is exactly what I meant. For that to happen, we have to have the same kind of generative model. We have to speak the same language and we have to
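The idea that a generative model covers paths rather than just states can be sketched with a standard hidden Markov factorization; this is an illustrative form, not a formula given in the conversation:

```latex
% Generative model over a path of hidden states s_{1:T} and observations o_{1:T}:
p(o_{1:T}, s_{1:T}) = p(s_1) \prod_{t=1}^{T} p(o_t \mid s_t) \prod_{t=2}^{T} p(s_t \mid s_{t-1})
% p(s_t | s_{t-1}) encodes the transition dynamics -- the "narrative" as a path
% through state space -- while p(o_t | s_t) encodes how states are broadcast
% as observations. Two agents following the same conversation would, on this
% reading, share approximately the same transition and likelihood structure.
```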
I suspect that what makes this project so difficult is that our models of reality are necessarily coarse grained. They don't describe
the universe as it is, in a way in which it can exist from the ground up. Rather, they start from the vantage point of an observer that is sampling the universe at a low resolution, both temporal and spatial, and in only very few dimensions. And this model is built on a quite unreliable and non-deterministic substrate. And this puts limitations on what we can understand with our unaugmented mind. I sometimes
joke that the AGIs of the future will like to get drunk to the point where they can only model reality with 12 layers or so, and then they have the same confusions as human physicists when trying to solve the puzzles that physics poses. And they might find this hilarious, because many of the questions that have been stumping us during the last 130 years, since we have had modern physics, might be easy to resolve if our minds were just a little bit better.
We seem to be scraping at a boundary of our understanding for a long time. And now we are, I think, at the doorstep of new tools that can solve some puzzles that we cannot solve and then break them down for us in a way that is accessible to us, because they will be able to understand the way in which we model the world. But until then, we basically work in overlapping realities.
We have different perspectives on the world and the more we dig down, the more subtle the differences between our models of reality become. And this also means that if you have any kind of complex issue, we tend not to be correct in groups. We tend to be only sometimes individually correct in modeling them and we need to have a discourse between individual minds about what they observe and what they model. Because as soon as a larger group gets together and tries to vote about how to understand
concept like variational free energy, all the subtleties are going to be destroyed because not all of the members of the groups will be understanding what we're talking about. So they will replace the most subtle understandings with common ground that is not modeling reality with the degree of resolution that would be necessary or they're not able to break things down to first principles. And this first principles understanding I think is an absolute prerequisite when we want to solve foundational questions.
I sometimes doubt whether physics is super well equipped for doing this. When I was young, I thought physics is about describing physical reality, the world that we are in at some level. And now I see that physics is an art. It's the art of describing arbitrary systems using short algebraic equations.
And the stuff that cannot yet be described with short algebraic equations, like chemistry, is ignored by physicists and left to lesser minds. And only 8% of physicists end up working in physics in any traditional sense after their degree. The others work in process design and finance and healthcare and many, many other areas where you can apply the art of modeling arbitrary systems using short algebraic equations.
And whenever that doesn't work, physicists are not worth very much. I've seen physicists trying to write programs, and really, many of them have this bias of trying to come up with algebra and geometry where calculus would be much better or where automata would be much better. And nature doesn't care about this. Nature is using whatever works and whatever can be discovered. And very often that is not close to the toolkit of this intellectual tradition of the physicists.
But I think it's sometimes helpful to see that all these intellectual traditions that our civilization has built start out with some foundational questions and then congregate around a certain set of methods.
and it can be helpful to just go to the outside of all these disciplines for a while and then move around between them and look at them and study their tools and see what common ground and what differences we can discover. I was quite shocked when I learned that a number of machine learning algorithms had been discovered in the 80s and 90s by econometricians and were just ignored in AI and had to be reinvented from scratch.
I suspect there's a lot of these things happening in our Tower of Babel that we are creating across the sciences, because our languages start to differ in subtle ways and sometimes fundamentally mismodel reality or ignore it, to the point where I think most living neuroscientists are practically dualists.
They will not say it out loud because that's been frowned upon, but they don't actually see a way to break down consciousness, mind and self into the stuff that would run on neurons. Or they don't even think about the causal structure in the same way as you would need to get to this point. And as a result, they believe that thinking about these concepts is fundamentally unscientific. It's outside of the pursuit of science and they do this only in church on Sundays. Yeah. So what's the solution to this Tower of Babel?
Do you also see the problem similarly and do you see the solution similarly?
I think I do. It's a nice biomimetic AI. I love this notion. I hope no physicists are watching. Also, the only physicists that I know all want to do neuroscience or psychology in addition to economics and healthcare, which is all small particle physics. It's either neuroscience or small particle physics.
And as I get older, I'm increasingly compelled by arguments that I've read from very senior, old physicists that it's all about measurement, it's all about observation, and in a sense all of physics is just one of these
generative models that has a particular capacity to disseminate itself, so that we do have this common language and this common ground. So just to reiterate one of Joscha's points, physics in and of itself is just another story that we find particularly easy to share. But I do take the point that even within physics there is this tendency.
The free energy principle is unashamedly committed to classical formulations of the universe in terms of random dynamical systems and Langevin equations, and that would horrify quantum physicists and quantum information theorists, who just wouldn't think about it that way. That's why I slipped in that reference to quantum reference frames, because what we're talking about now is the alignment of quantum frames of reference, but that uses a completely different language. And that, I think, is part of the problem that Joscha brought to the floor:
that what we need is something that's superordinate, that joins the dots, and it may well require transcending the particular common ground or physics or calculus or philosophies that have endured. So if, by that, artificial intelligence is going to be one way of joining the dots so that people in machine learning don't have to reinvent the wheel every generation,
then I think he's absolutely right. Whether I'd call that artificial intelligence or not, I'm not so sure. I think it would start to become part of a grander ecosystem that would have a natural aspect to it. But perhaps I could ask you: do you actually mean artificial intelligence in the sense that it doesn't involve
individual scientists or people?
Maybe I don't understand your notions of mortality and biology completely. To me, biology means that the system is made of cells, of biological cells, of cells that are built on the carbon cycle foundation on certain chemical reactions that nature has discovered and translated into machines made from individual molecules that interact in very specific ways. And it's the
only agent that we have discovered to occur in nature, I think. All the other agents we discover are made by or of cells. And mortality is an aspect of the way in which multicellular systems adapt to changing environments: they have offspring that mutate and then get selected, and as a result we get a trajectory of change that can be calibrated to the rate of change in an ecosystem.
And this is one of the reasons for mortality. Another reason for mortality is if you set up a system that has suboptimal self-stabilization, it is going to deviate from its course.
Imagine you build an institution like the FDA and set it up to serve certain purposes in society. After a few generations, the people inside the organization to a very large degree start serving the interests of the organization, and the interests that have captured the organization. And so it becomes not only larger and more expensive,
but at some point it's possibly doing more harm than good. That doesn't mean that we don't need an FDA, but it might mean that we have to make the FDA mortal, so it gets reborn every now and then and can put itself back on track based on a specification that outside observers think is reasonable, rather than a specification that needs to be negotiated with the existing stakeholders within that organization and the few people who are left outside.
I think this is one of the most important aspects of mortality. But imagine that all of Earth were colonized by a single agent, something that is able to persist not only across organisms, but is also able to think using all the other molecules that can be turned into computational systems, into representational architectures and agentic architectures. You'd have a planet similar to Stanislaw Lem's Solaris: a thinking planet that realizes what it is, that realizes it is basically trying to defeat entropy for as long as possible, and to this end builds complexity.
Why would the system need to be mortal? And would that system still be biological? It would be self-organizing. It would be dynamic. It would be threatened with death, with non-existence. It would react to this in some way. But I'm not sure if biology and mortality are the right categories to describe it. I think these are more narrow categories that apply to biological organisms in the present setting of the world.
I picked up on a phrase you said, Karl, which is that one of the solutions may be AI. That's what you were saying in response to Joscha, which makes me think: had Joscha not mentioned AI as the resolution to the indecipherability across discipline boundaries, what would you have said a solution, or the solution, would be? Well, I think the solution actually lies in what Joscha was just saying, in the sense that if the
self-understanding is seen in the context of exchange with others, then that provides the right kind of context. I've used the word a lot now, but I'm talking about an ecosystem at any arbitrary scale, an ecosystem that provides that opportunity for self-evidencing, a phrase that just is a statement that you've got an itinerant, open kind of self-organization that maintains this minimum-entropy state in exactly the same way that Joscha was intimating. So I'm just thinking about what is an inevitable aspect of self-organizing systems that will endure over time, in the sense of minimizing the entropy of the states that they occupy. And I do think that is the solution, which is why I was pushing back against artificial intelligence, but for a particular reason. The way that mortal computation is framed,
certainly in that paper in which I was the second author, is that immortal computers are built around software so they are immortal in the sense you can rerun the same program on any hardware. If the running of the software and the processing that ensues is an integral part of the hardware on which it is run, then it becomes mortal.
And that's important because the opportunity for dying, if you are mortal, now creates the kind of selective pressure, from an evolutionary perspective, of exactly the kind that Joscha was talking about. If you don't have the opportunity to die, if you don't have the opportunity to disassemble the FDA because it's no longer fit for purpose,
then you will not have a sustainable self-organization that continually maintains a low entropy in the sense that it has some characteristic recognizable states. So I think there is a deep connection between self-organization that we see in biological, social, possibly meteorological systems and
a certain kind of mortality in which, for example, information about the kind of environment that I am fit to survive in and to learn about is part of my genomic structure. But to realize that, if you like, evidence accumulation through evolutionary mechanisms, I have to have a life cycle. I have to die.
I'm not implying that everybody has to die in order to live. I'm implying that there has to be some particular kind of dynamics. There has to be a life cycle. It could be an economic life cycle. It could be boom and bust, for example, but that has to be part of this self-evidencing and certainly an exchange
in the kind of multicellular context that Joscha was mentioning. So my reading of "mortal" in this particular conversation would be: yes, it is the kind of biological behavior that is characteristic of cells that self-assemble but also die. One attractive metaphor that came to mind when talking about the FDA, an organization becoming too big for its own good and no longer being a good model of the system in which it is immersed, so that it's not meeting its customers' needs and not even meeting its own needs, would be a tumor. So you could understand a lot of institutional pathologies and geopolitical pathologies, and possibly even climate change:
All of this can be read in terms of a process of mortal computation at a certain scale.
where there is an opportunity for things to go away, to dissolve. That has to be the case, in the same way that either the tumor kills you, or it necroses because it kills off its own blood supply. It can't be any other way, really. There is a third way: you can evolve an immune response against tumors.
Organisms that live much longer, because they have slower generational change, typically have better defenses against tumors than shorter-lived organisms like us. Basically, a tumor can be seen as a set of tissues, or a subset of agents (you can in principle have a tumor in an ant colony), that is playing a shorter game than the organism itself, than the larger system itself.
And you can sustain a number of tumors if your environment does not put too much pressure on you, but at some point the tumors are going to bring you down. So, for instance, I think that the free world has to decide at some point whether it is willing to accept being brought down and replaced by a different type of social order, or whether it's going to evolve or build or construct or design an immune response against tumors, and criteria to identify them and remove them.
I think that's not a natural law. At least I don't see how to prove from first principles that we cannot overcome a problem like institutional calcification or turning of institutions into tumor-like structures functionally. I think it might be possible to do that. The cell itself is not mortal. The cell is pretty much immortal. The individual cells can die and
disappear. But the cell itself is still the first cell: it's just splitting and splitting, and it's alive in all of us. Every cell in our own body is still this first cell, just split off from it. Right. And so the way in which organisms die and so on is just a detail in this larger project of the cell, which itself is, so far, immortal.
And when I talk about AI being the solution to everything, of course, I'm joking a little bit; I'm just echoing some of the sentiment and part of the enthusiastic culture of my young field. But I'm only joking a little bit, because I think that AI has the potential to reverse engineer the general principles of a learning agent,
of a system that is able to model the future and regulate for the future, and to realize these functions in an arbitrary way. And I would relax the notion of the hardware, the substrate.
Of course, it's still hardware, but it can be an arbitrary substrate, and the substrate can also be to a large degree software, which means causal principles that are implemented ultimately on physics. But this causal structure ultimately is a protocol layer that allows you to basically implement a representational language in which an agent can realize itself as a causal structure.
And I think that AI is currently working on very different substrates than the biological ones, but there is a superset of these principles that can make AI substrate-agnostic. I think the implication of the Church-Turing thesis is that it doesn't really matter which hardware you're using. In practice, it does matter, because if a hardware is not very deterministic, or doesn't give you a lot of memory, or is very slow, you will notice big differences.
But if you abstract this away, the representational power and the potential for agency is not really dependent on your hardware. It turns out that the hardware that you're currently using for AI is much, much more powerful than the hardware that biology is using. The reason why AI is so weak compared to human minds or biological systems is because the algorithms that we have discovered, we have discovered them by hand. These were people tinkering. Sorry, what do you mean the AI is weak?
I mean that in order to get a system that is almost coherent, we need to train it on the entire internet, on almost everything that humans have ever written. And as a result, we get a system that is using tremendously more resources than the human brain has at its disposal. I'm not talking about the computational power that is implemented in an individual cell, which might be very large.
But the part of the power of the individual cell that is actually harnessable by the brain for performing computation is very little: beyond what the neuron is doing for its own maintenance, housekeeping, metabolism, and communication with neighbors, only a small fraction is actually available for building computation at the brain level. As an example, I sometimes use the Stable Diffusion weights. Stability AI is an AI company that makes open-source models, and they made a vision model by training on GPUs with hundreds of millions of images and texts drawn from the internet, cross-correlating them until you can type in a phrase and get a picture that depicts that phrase. It's amazing that this works at all.
It requires enormous computational power because it's far less efficient than a human brain that learns how to draw pictures after seeing things. And these weights, this neural network, they know everything: they basically know how to draw all the celebrities,
how to draw in all artistic styles, all the plants, and everything is in there. And it's just two gigabytes. You can download it; it's only two gigabytes. And something like 80% of what your brain is doing is captured in these two gigabytes. It's so much more than what a human brain could reproduce; it's absolutely brute-forcing it. At the same time, two gigabytes doesn't seem to be a lot, which suggests that our own brain is probably not effectively storing much more information than a few gigabytes.
That's very humbling. And the reason we can do so much more with it, and so much faster, than the AI is not because biological cells are so much more efficient than transistors; it's because they are self-organizing, have been at this game for quite some time, and have figured out a number of tricks that human engineers couldn't figure out so far. Right. Karl, do you want to expand on points of contention, and the mortality and perhaps permanence of a cell?
I just wanted to celebrate this notion that the cell in a sense is immortal because of course the whole point of this is to try and understand
systems that endure over long periods of time, and that's what I meant by that. I didn't mean that death meant cessation; I just meant there's a certain life cycle, an itinerancy in play. So I thought that was nicely illustrated by the notion that the cell is in a sense unending. But the mortal/immortal distinction is more about divorcing the software from the substrate. And there's a bit of a pushback: if you want to look for differences in the respective arguments, then
a lot of people would say that all that housekeeping that goes on, in terms of intracellular machinations and self-organization, just is basal computation at a particular level, and that more macroscopic kinds of belief updating and processing and computation supervene at a certain scale; and indeed that progression, in a sort of scale-invariant sense, is one manifestation of what you were talking about before: that biological things are cells of cells of cells, with increasingly higher kinds of scales and different kinds of computation. But the idea is that the first principles apply at each and every level, and it's the same principle at each and every level. And if you pursue that, one has to ask why modern AI, particularly machine learning, is so inefficient,
dangerously inefficient. And there's, I think, a first-principles account of that, and the account would go along the following lines: the only objective function that you need to explain existence is the likelihood of you being you, your marginal likelihood,
which statistically is the model evidence. The model evidence, or the log of that evidence, can always be written down as accuracy minus complexity. Therefore, to exist is to minimize complexity. Why is that important? Well, first of all, it means that the coarse-graining we were talking about earlier on is not a constraint; it is actually part of an existential imperative to coarse-grain in the right kind of way.
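The accuracy-minus-complexity decomposition of log evidence mentioned here can be checked numerically in a toy model. The discrete example below is purely illustrative; the model, probabilities, and variable names are assumptions, not anything from the conversation:

```python
import math

# Toy generative model: hidden state z in {0, 1}, binary observation x.
prior = {0: 0.5, 1: 0.5}   # p(z)
lik   = {0: 0.9, 1: 0.2}   # p(x = 1 | z)

# Exact evidence and posterior for the observation x = 1.
evidence = sum(prior[z] * lik[z] for z in prior)          # p(x = 1)
post = {z: prior[z] * lik[z] / evidence for z in prior}   # p(z | x = 1)

# With q set to the exact posterior, the variational bound is tight:
# log evidence = accuracy - complexity.
accuracy = sum(post[z] * math.log(lik[z]) for z in prior)                 # E_q[log p(x|z)]
complexity = sum(post[z] * math.log(post[z] / prior[z]) for z in prior)   # KL(q || p(z))
elbo = accuracy - complexity

assert abs(elbo - math.log(evidence)) < 1e-9   # the decomposition holds exactly here
```

Because the KL term is non-negative, maximizing this bound penalizes beliefs that depart from the prior, which is the sense in which existing (maximizing evidence) entails minimizing complexity.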
The other reason it's important is that there is a thermodynamic link between the complexity, scored in terms of belief updating or processing or computation, and the thermodynamic cost. And if that's the case, it explains why the current direction of travel in machine learning is so inefficient,
and what it tells you is that there is a lower limit on the right way to do things: a lower limit on the thermodynamic efficiency, and on the informational and computational efficiency, specified by the Landauer limit. Why does modern machine learning not get anywhere close to that Landauer limit?
The answer
is, I think, the von Neumann bottleneck. It is the memory wall. It is that people are trying to do computation in an immortal sense, by running software without careful consideration of the substrate on which they're running or implementing that computation. So I would push back against the notion that it is even going to be possible, irrespective of whether it's the right direction of travel, in terms of artificial intelligence. So it doesn't have to be a biological cell, but it certainly has to conform to the same principles of multi-scale self-organization of the most efficient sort, which just is the optimization of the marginal likelihood, the evidence for the states that that particular computing device or computer wants to be in. So that's where I had a slight sort of quibble:
I don't think that's the right way to go about it. I would actually come back to your very initial argument, Joscha, that it has to be much more biologically inspired, much more biomimetic. And part of that inspiration
is the motivation for looking at the distinction between running immortal software on von Neumann architectures, on Nvidia chips, relative to a much more biomimetic approach, say photonics or neuromorphic computing. I think that really does matter in terms of getting
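For scale, the Landauer limit referred to above can be computed directly. A minimal sketch (room temperature of 300 K is an assumed value, and the GPU figure in the comment is an order-of-magnitude estimate, not from the conversation):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # assumed ambient temperature in kelvin

# Landauer limit: minimum energy dissipated to erase one bit of information.
e_bit = k_B * T * math.log(2)   # roughly 2.9e-21 joules per bit at 300 K

# For comparison, contemporary digital hardware dissipates on the order of
# picojoules (1e-12 J) per elementary operation, many orders of magnitude
# above this thermodynamic floor.
gap_orders = math.log10(1e-12 / e_bit)   # roughly 8-9 orders of magnitude
```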
Okay, let me push back against this.
First off, I do agree that current AI is brutalist, in the sense that it is not making the best use of the available substrates and is not building the best possible substrates. We have a number of path-dependency effects. It's not that the stuff we are building and using isn't clever, but it's a far cry from what biology seems to have discovered.
At the same time, there is relatively little funding going into AI, and relatively little energy consumption, given what it gives you. If academics hear that it costs $20 million to train a model, they almost faint, because they compare this with their departmental budget. But if you compare it with the cost of making a halfway decent AI movie in Hollywood, it's negligible. So basically what goes into an AI project is far less than
what goes into a Hollywood movie about AI. And if you compare it at that scale, if you look at the societal benefit of watching an AI movie, or another blockbuster about the Titanic or so, it's not nothing, but I think that AI has the potential to be dramatically more valuable than this. I think that AI,
even though it might sound counterintuitive, is not using a lot of energy and is still not very well funded at the moment, compared to what the value of it is. Also, the leading labs do not believe that the transformer is going to be the architecture we have in the end. It just happens to be one of the very few things we have discovered that currently works at scale, that can actually be scaled up in this brutalist way. And it's already better at completing prompts than the average person,
and it's even better at writing code than many people. It can translate between programming languages. You can write an algorithm down in English, and it can even help you write the algorithm down in English, and then translate it into the programming language of your choice. And it's pretty good at it. If it makes a mistake, and it often makes mistakes, it can also understand the compiler messages and then suggest fixes that often work. In many ways, I've found that it's already better than a lot of people I've worked with in corporate contexts,
both at writing press releases and at writing code. It's not as good as the top-level people in their field, but it's quite surprising. And so there is this interesting, open, tantalizing question: can we scale this up, by using a slightly better loss function, slightly more compute, slightly better curated data (and the systems can help us curate data and come up with different architectures and so on), to get this thing to be better at AI research than people?
If it gets better at AI research than people, then we can leave the rest to it and go to the beach, and it will come up with architectures and solutions that are much more efficient than what we have come up with. Meanwhile, there are many labs and teams working on different hardware and on different algorithms at the same time.
The fact that you see so much news about the transformer at this point is not because everybody ignores everything else and doesn't work on it anymore, or has religious beliefs about the transformer being the only good thing. It's because it's the thing that currently works so well. People are working on all the other things too, but the thing with the most economic impact and the most utility happens to be the stuff that currently works.
And so this may cloud our perception: we think it's the von Neumann architecture and so on. But in some sense, the GPU is no longer a von Neumann architecture: we have many pipelines that work in parallel, taking in smaller chunks of memory that are located closer to the local processor. And while it's not built the same way as the brain, where all the memory is directly inside the cell or in its immediate vicinity,
it is much closer to it, and it's able to emulate this. And if I look at the leading neuromorphic architectures, I can emulate them on a GPU, and it's not slower. This is all still early-stage research. But we're not emulating neuromorphic architectures on GPUs for the most part, largely because they don't give us that many benefits over the existing architectures and libraries,
or because the existing architectures and libraries work so well that people use this stuff for now, and it creates a local bubble until somebody builds a new stack that overtakes it. And I think this is all going to happen at some point, so I'm not that pessimistic about these path effects. What I can see is that our computers can read text at a rate that is impossible for human beings, when you feed the data into a large language model for training.
With this paradigm, it gets to be coherent in the limit. It's an interesting question. Maybe this paradigm is not correct.
Maybe humans are doing something different. Maybe humans are maximizing coherence or consistency, and we have a slightly different formal definition. Life on Earth or agency in the universe might be minimizing free energy in the limit, but individual organisms are not able to figure that out. They do something that is only approximating it, but locally works much better and converges much faster. Maybe there are different loss functions that we have yet to discover that are more biological or more similar to biological systems.
Also, one of the issues with biomimetic things is it means mostly mimicking the things that scientists in biology and neuroscience have discovered so far. And this stuff all doesn't work. The reason why Mike Levin doesn't call himself a neuroscientist, I suspect, but a synthetic biologist is that he doesn't want to get in conflict with the dogmatic approaches of some neuroscience, which believes that computation stops at the neurons.
It's only neurons that are involved in computing things.
It could be that, when you look at brains, they are basically the telegraph networks of an organism: the neuron is a telegraph cell. It's not unique in its ability to perform computation; it's only unique in its ability to send the results of computation, using some kind of Morse code, over long distances in the organism. And if you want to understand how organisms compute and you only look at neurons, it might be like looking at the economy around 1900 and trying to understand it by only modeling the telegraph network.
You're going to learn fascinating things by looking at an economy through its telegraph network and its Morse code. But you'd be thinking that communication can only happen in this Morse code, rather than, say, by sending RNA molecules to your neighbors. Why would you want to send spike trains if you can send strings?
Why would you want to perform such computations in a slow, awkward way? Why would you translate information into the time domain if you can send it in parallel, all at once? So when we talk about biomimetic approaches, we often talk about emulating things that we only partially understand, and that don't actually work in simulation.
There is no working connectome model right now that you can turn into a computer simulation that actually does what the brain is doing. And it's not because computers don't have the power to run the ideas that neuroscientists have developed; it's that neuroscientists haven't developed ideas that actually work. It's not that neuroscientists are stupid or their ideas are not promising; they're just incomplete at this point. We don't have complete models of brains that would work in AI.
The reason why AI has to reinvent things from scratch is because it takes an engineering perspective. It thinks about what would nature have to do in order to approximate this kind of function and what's the most straightforward way to implement this and test this theory. This is this experimental engineering perspective that I suspect we might also need in neuroscience.
not in the sense that we translate things into von Neumann architecture in neuroscience, but in the sense that we think about what would nature have to do in order to implement the necessary mathematics to model reality.
I agree with many of those things entirely; I'm just trying to remember the ones I can argue with. I love this notion that there's more money going into Hollywood films about AI than into actual AI research. I've never heard that before; that's marvelous.
and also the point about sort of GPUs. I think that's just a reflection of the natural selection in the AI community of what I was trying to say before about
the move away from von Neumann architectures to more mortal computing. If you talk to people doing in-memory processing, or processing-in-memory as computer scientists call it, that's where they'd like everybody to be, and that's what I meant, really, by
that aspect of mortal computing: the infrastructure and the buses and the message passing, having everything local, speaks to the hardware implementation. I agree entirely that that is the direction of travel, and I didn't want to imply that
GPUs were the wrong way of doing it. Also, I wasn't really referring to transformer architectures; as you say, they're just highly expressive, very beautiful Bayesian filters, and are now being understood as such. As my friend Chris Buckley would say, people are starting to Bayesplain how a transformer works.
What would I disagree with? I noticed that you, on a number of occasions, were trying to identify the universal objective function for doing things better.
Well, I think that ultimately utility relates to what makes the system stable and self-sustaining.
So if you look at any kind of agent, it depends on what conditions can stabilize that agent. And this comes down to very much the way in which you model reality, I think. So it is about minimizing free energy in a way. But if you look at our own lives and we look for a sandwich or for love or for a relationship or for having the right partner to have children with and so on, we're not thinking very much about minimizing free energy.
And we perform very local functions because we are only partial agents in a much larger system that you could understand as the project of the cell or as the project of building complexity to survive against the increasing entropy in the universe. And so basically we need to find sources of entropy and exploit them in a way that we can. And this depends on the agent that we currently are.
This narrows the broader notion of the search for free energy down into more practical, applicable, narrow things that can deviate locally very much from this pure, beautiful idea. With respect to a principle that should be discovered, has to be discovered, and might be discovered in the context of AI: I suspect that self-organizing systems need different algorithms than the GPU-based ones we're currently using for learning, because we cannot impose this global structure on them. So I suspect that there is a training algorithm that nature has discovered, which is in plain sight and which we typically don't look at, and that's consciousness. I suspect that the reason every human being is conscious, and no human being is able to learn anything without being conscious,
or to produce complex behavior without being conscious, is not so much that consciousness is super unique to humans, evolved at the pinnacle of evolution and bestowed on us and us alone. We do not become conscious after the PhD; we become conscious before we can drag a finger. So I suspect that consciousness itself is an aspect of, or depending on how you define the term, the core of, a meta-learning algorithm
that allows the self-organization of information processing systems in nature. It's a pretty radical notion, a conjecture at this point; I don't know whether it's true. But the idea is that you have a function that perceives itself in the act of perceiving. It's not conceptual, not cognitive; it sits at a precognitive, perceptual level, where you notice that you're noticing, but you don't have a concept of noticing yet.
And from this simple loop, which keeps itself stable, controlling itself to remain stable and remain an observer (the observer constituting itself as an observer), you build all the other functionality in your mind. You start imposing a general language on your substrate, a protocol that is distributed via words, so that humans become trainable, learn to speak the same language, behave in the same way, and every part of the mind is able to talk to all the other parts of the mind. And you can impose an organization that
removes inconsistencies. This is probably one of the big differences between how biological systems learn and control the world and how artificial, engineered systems do it. I agree entirely. You've brought up so many bright and interesting ideas, it's difficult to know what to comment on.
Just one thing: when I pressed you on what is good, you basically said, to survive. So I think that brings us back again to this notion of mortality being, at the end of the day, the possibility of eluding mortality being part of… But not to survive as an individual, right? Human beings are built in such a way that we have to be mortal.
We are not designs that can adapt to changing circumstances. If the atmosphere changes, if our food supply changes too much, we need to build a different organism. We need to have children that mutate and get selected for these new circumstances. But in principle, intelligent design would be possible. It's just not possible with the present architecture because our minds are not complex enough to understand the information processing of the cell well enough to redesign the cell in situ.
And in principle, that's not something that would be impossible; it's just outside the scope of biological minds so far. Right. Well, individually, we have to be mortal. But in principle, the cell can be immortal, or there could be systems that go beyond the cell, that encompass it, that are a superset of what the cell is doing and of what other information-processing agents could be doing in nature, and that basically make sustainability happen.
And I think sustainability is, in some sense, a better notion than immortality. So, yeah, again, I agree entirely. I often look at the physics of self-organization as just a description of those things that have been successful in sustaining themselves. And indeed, the free energy principle is basically just: what would that look like, and how would you write that down?
And of course, the free energy theorists would argue that the ultimate, the only objective function is a measure of that sustainability: the evidence that you're in your characteristic states. So if properly deployed, you should be able to explain all of those aspects of behavior that characterize you and me in terms of self-evidencing, or free energy minimization, such as choosing the right partner,
such as behaviors that are only understandable in relation to some kind of selfhood. And I'm using selfhood in the way I think you're using this basic notion of sentience. What would that mean from the point of view of the free energy principle? It would mean, basically, that you have an existential imperative to be curious. If you just read the equations:
if I am choosing how to act next, then I'm going to choose those actions that minimize my expected surprise, or resolve my uncertainty. I'm going to act as if I'm a curious thing. And I bring that to the table because that is exactly what is not an aspect of any of the
artificial intelligence that you described before: the machine that can translate from one language to another, the machine that can map from some natural text to a beautiful graphic. These are wonderful and beautiful creations, and they are extremely entertaining, but they are not curious. And as such, they do not comply with the free energy principle, which means that they're not sustainable,
which means that one has to ask what's going to happen to them. Perhaps we might sustain them in the way that we do good art. But from the point of view of that kind of, perhaps I shouldn't use the word biomimetic because perhaps that's too loaded, but that way of sustaining oneself through self-evidencing, I do not think it admits an intelligent design of something that is not of itself curious as part of its self-organization. So where would you see curiosity as part of this? Does the FEP have to be curious? Is there any aspect of the utility afforded by, say, reinforcement learning models or deep RL or Bayesian RL? Does that have curiosity under the hood as part of the objective function?
I really liked how you brought art into this discussion, as an example of something that might be similar to an AI system that doesn't know what it's good for and only exists because we sustain it, because it's not self-sustaining. ChatGPT is not paying its own energy bills. It doesn't really care about them. It's just a system that is completing text at this point. Maybe it will, if you task it with this and it figures out the mathematics at some point, but right now it doesn't.
It's an artist, in a sense. I sometimes joke that it's a system that has fallen in love with the shape of the loss function rather than with what you can achieve with it. Art is about capturing conscious states because they are intrinsically important. Is this art, or can this be thrown away? It is art. It is important. And in this sense, art is the cuckoo child of life. It's not life itself.
The artists are elves. The living organisms are orcs. They only use art for status signaling or for education or for ornamentation. The artist is the one who thinks magic is important, building palaces in our minds, showing them to each other. That's what we do. I'm much more an artist at heart than I am a practical human being that maximizes utility and survival.
But I think I can also see that this is an incomplete perspective. It means that I'm identifying with a part of my mind, the part that loves to observe and revamp the aesthetics of what I observe. I also realize that this is useful to society, because it's able to hold down a particular corner of the larger hive mind that needs to be held down. If I were somebody who only maximized utility, I would be a great CEO, maybe, but I would not be somebody who is able to tie together different parts of philosophy and see what I can see by combining them, or by looking at them through a fresh lens. And so it's sometimes okay that we pursue things without fully understanding what they're good for, if we are part of a larger system that does. And our own mind is made out of lots of sub-behaviors that individually do not know what we are about,
and only together do they complete each other, to the point where we become a system that actually understands the purpose of our own existence in the world to some degree. And of course, that also goes across people. Individually, we are incomplete, and the reason why we have relationships with other people is that they complete us. And this incompleteness that we have individually is not just inadequacy; it's specialization. The more difficulty we have finding a place in the world, the more incomplete we are. But it often also means we have more potential to do something in that area of specialization. And individually, it might be harder to find the right specialization, but accepting that individual minds are incomplete in the way in which they're implemented in biology is, I think, an important insight.
And this doesn't have to be the case for an AI agent, of course, or for a God-like agent that holds down every fort, that is able to look at the world from every angle, that holds all perspectives simultaneously. Carl, did that answer your question about the curiosity of the FEP? Yes, and it brings in the sort of primacy of the observer. So I've been intrigued by this notion of being incomplete.
Do you want to unpack that a little bit? Yes. First of all, Kurt, thanks for pointing out that I didn't talk about curiosity. Curiosity ties into this problem of exploration versus exploitation. The point of curiosity is to explore the unknown, to resolve uncertainties, to discover possibilities of what could also be and what we could also be doing. And this is in competition to executing on what we already know.
And if you are in an unknown environment, or a partially known environment, it's unclear how much curiosity you should have. Nature seems to be solving this with diversity. So you have agents that are more curious and agents that are less curious, and depending on the current environment and niche, they are going to be adaptive or non-adaptive, and selected for or against. So I do think, of course, curiosity is super important, but it's also what kills the cat, right?
The early worm is the one that gets eaten by the bird. And so curiosity is important. It's a good thing that we are curious and it's very important that some of us are curious and retain this curiosity so we can move and change and adapt. And it's one of the most important properties in a mind that I value, that it's curious and always open to interaction and discovering ways to grow and become something else.
But it's risky to be too curious, instead of just exploiting what you already know, acting on that, and looking for the simple solution to your problems.
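As an aside, the exploration-exploitation trade-off Bach is describing here is often illustrated with bandit-style action selection, where a tunable "curiosity" bonus rewards trying under-explored options. The following is a minimal sketch only; the function name, the bonus form, and the numbers are illustrative assumptions, not anything from the conversation:

```python
def choose_arm(values, counts, curiosity=1.0):
    """Pick an action by adding a curiosity bonus to each estimated value.

    values:    estimated payoff of each arm (exploitation term)
    counts:    how often each arm has been tried so far
    curiosity: weight of the exploration bonus; 0 means pure exploitation

    Arms tried less often receive a larger bonus, so a more "curious"
    agent keeps sampling them; the bonus shrinks as evidence accumulates.
    """
    def score(i):
        bonus = curiosity / (1 + counts[i])  # decays with experience
        return values[i] + bonus
    return max(range(len(values)), key=score)
```

With equal value estimates, a curious agent (curiosity=1.0) picks the never-tried arm, while setting curiosity=0 reduces the rule to picking the highest estimated value, which matches Bach's point that both dispositions can be adaptive depending on the niche.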
I think it's a big problem in science that we drive the curiosity out of people. The first step in thinking is curiosity: conjecture, trying things that may not work. And then you contract, and the PhD seems to be a great filter that drives the curiosity out of people. After that, they're only able to solve problems using given methods, and they can no longer allow this to themselves. It's a violation of your curious mind, but the existential questions somehow stop after graduation. So there seems to be some selection function against thinking that is happening, that is largely driving curiosity out of people, because they feel they can no longer afford it between grant proposals. And so, in a sense, yes, I would like to express how much I cherish curiosity and its importance, while pointing at the reason why not everybody is curious all the time, and why too much of a good thing is also bad. Right. And the incompleteness now.
Would it be possible for you to expand
on the early worm gets eaten by the bird? Because the phrase is that the early bird gets the worm, but that doesn't imply that the early worm gets eaten by the bird, because they could have different, overlapping schedules. In fact, it could be the late worm that gets eaten. And there is such a thing as a first-mover advantage. AltaVista got eaten by Google because instead of giving people the search results they wanted, it gave them ads. And now Google has discovered that it's much better to be AltaVista. But AltaVista got eaten by Google. It was too early. Google has now given up on search; it instead believes in just giving you a mixture of ads that rhyme with your search term. You could say that AltaVista was the early worm. I'm just riffing on my frustration with Google, but I think that very often we find that the earliest attempts to do something cannot survive, because the environment is not there yet. The pioneers are usually sacrificial.
There is glory in being a pioneer. There is no glory in copying what worked for the pioneer, but there is very little upside in greatness. Understood. Carl? Greatness, which is not good. Greatness is not good. We come back to the tumor again, and the art of good management, riffing on your focus on art: just thinking about what makes a good CEO. Is it somebody who makes lots of money and is utilitarian, or somebody who has the art of good management and considers the objective function, the sustainability of his or her institution and all the people who work for it? I think there are very different perspectives on what this objective function should be, and I was trying to argue before that it can't be measured in terms of greatness or money or utility. It can only be measured in terms of sustainability. The other thing I liked was curiosity. So here's my little take on that. Curiosity killed the cat.
I think that is exactly what is implied by the importance of mortal computation: that, in a sense, we all die as a result of being curious, after a sufficient amount of time, and it can be no other way. I mean that in a very technical sense.
So if you were talking to aficionados of active inference, an application of the free energy principle, what they would say is that in acting, in dissolving the exploration-exploitation dilemma, you have to put curiosity in as an essential part of the right objective function that underwrites our decisions about choices and our actions, simply in the sense that the expected surprise, or the expected negative log evidence or self-information, can always be written down as expected information gain plus expected utility or negative cost. This means that just the statistics of self-organization bake in curiosity, in the sense that you will choose those actions that resolve uncertainty; you choose those actions that have the greatest information gain. So curiosity, I think, is a necessary part of existing, certainly for things that exist in a sustainable sense. But my question was: I want to know more about this intriguing notion that we are incomplete
unless considered in the context of other things like us that constitute our lived, or at least sensed, world. But I just wanted to also ask: do you see curiosity as being necessary for that kind of consciousness that you associated with sentience before? Would it be possible to be conscious
without being curious? Acknowledging that there are lots of things that are not curious: viruses, I suspect, are not curious; trees are probably not that curious; they don't plan their actions to resolve uncertainty. But there are certain things that are curious, things like you and me. So I'm just wondering whether there are different kinds of things, some of which are more elaborate in terms of the kind of self-evidencing that they engage in.
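An editorial note on the decomposition Friston refers to above: in the active-inference literature, the expected free energy of a policy $\pi$ is commonly written (notation varies across papers; this is one standard form, stated here as background rather than anything said in the conversation) as:

```latex
G(\pi)
= \underbrace{-\,\mathbb{E}_{Q(o,s\mid\pi)}\!\left[\ln Q(s\mid o,\pi) - \ln Q(s\mid\pi)\right]}_{-\ \text{expected information gain (epistemic value)}}
\;\underbrace{-\,\mathbb{E}_{Q(o\mid\pi)}\!\left[\ln P(o)\right]}_{-\ \text{expected utility (extrinsic value)}}
```

Minimizing $G$ therefore selects actions that jointly maximize expected information gain and expected utility, which is the technical sense in which curiosity is "baked in" to self-organization.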
I think that a good team should also contain curiosity maximizers, people who are mostly driven by curiosity, so that you have that voice in your team. And I love being that voice that is driven by finding out what could be. You also need people who focus on execution and who are not curious at all. And in this way, I think we can be productively incomplete.
If you have somebody who is by nature not very curious but is able to accept the value of somebody who is, and vice versa, we can become specialists at being curious or at execution. And when we can inform and advise each other, we can be much better than we could be individually if we tried to do all those things simultaneously. And in this sense, I believe that if you are a state-building species, you do benefit from this kind of diversity,
if you're not an individual agent that has to do all the things simultaneously. I don't know how curious trees are; I'm somewhat agnostic with respect to this. I suspect that they also need to reduce uncertainty, and I don't know how smart trees can become. When I look at the means and motive of individual cells: they can exchange messages with their neighbors, they can make this conditional, and evolution is probably getting them to the point where they can learn. So I don't see a way to stop a large multicellular organism that becomes old enough from becoming somewhat brain-like. But if it has no neurons, it cannot send information quickly over long distances. So it will take a very long time, compared to a brain or nervous system, for a tree to become coherent about any observation. It takes so much time to synchronize back and forth the information that the tree observes locally.
And as a result, I would expect the mental activities of the tree, if they exist, which I don't know, to play out at such slow time scales that it's very hard for us to observe them. And so, what would it look like if a tree were sentient? How would it look different from what we already observe and know? You notice that trees communicate with other trees, that they sometimes kill plants around them, that they make decisions about that.
We know that there are networks between fungi and trees that seem to send information over longer distances in forests, so trees can prepare an immune response to pests that invade the forest from one end while they're sitting at the other end. And we observe all this, but we don't really think about the implication. What is the limit of the sentience of a forest? I don't know what that is.
And I'm really undecided about it, but I don't see a way to instantly dismiss the idea that trees could be quite curious and could actually, at some level, reason about the world. But probably, because they're so slow, the individual tree doesn't get much smarter than a mouse, because the amount of training data that the tree is able to process in its lifetime, at a similar resolution, is going to be much lower.
They do live a long time. I have many friends whom you would enjoy talking to about that, and you seem very informed in that sphere. Our ancestors were convinced that trees could think.
Fairies are the spirits of trees and they move around in the forest using the internet of the forest that has emerged over many generations of plants that have learned to speak a shared protocol.
And I think it's a very intriguing idea. We should at least consider it as a hypothesis. Absolutely. There was a great BBC series on the secret life of plants where they speed things up 10 or 100 times, and the plants look very sentient when you do that. Our ancestors said that one day in fairyland is seven years in human land. Maybe this alludes to this temporal difference. So, on to differences between you two.
Why don't we linger on consciousness? And Carl, if you don't mind answering: what is consciousness, where is consciousness, and why is consciousness? In other words, where is it? Is it in the brain? Is it in the entire body? Is it an ill-defined question? What is it? Why do we have it? What is its function? And then we'll see where this compares and contrasts with Joscha's thinking. Right.
Yeah, I am not a philosopher, and sometimes the story I tell depends on who I am talking to. But at its simplest, I find it easiest to think of consciousness as a process, as opposed to a thing or a state, and specifically a process of computation, if you like, or belief updating. So I normally start thinking about questions of the kind you just asked me.
But replacing consciousness with evolution. So where is evolution? What is evolution? Why is evolution? All of those questions, I think, are quite easy to answer. Sometimes it's a stupid question; sometimes there's a very clear answer. So where is evolution? Well, it is in the substrate that is evolving. So where is consciousness? It will be in the information processing, the belief updating, that you get at any level.
Acknowledging Joscha's point that it doesn't have to be neurons: it could be mycelial networks, it could be intercellular communication, it could be electrical filaments. There is a physical instantiation of a process that can be read as a kind of
belief updating or processing. If I were allowed to read computation as that kind of process, then that would be, I think, where consciousness would be found. Would that be sufficient to ascribe consciousness to me or to something else? I suspect not. I think you'd have to go a little bit further, and I suspect Joscha would want to articulate how much further, but there will be a focus on self-modeling. So it's not just a process of inference; it's actually inference under a model of self. I'd put something else into the mix as well. To be conscious, I suspect, in the way that you're talking about,
means you have to be an agent, and to be an agent means that you have to be able to act. And I would say more than just acting, more than acting, say, in the way that plants will act to broadcast information that enables them to mount an immune response to parasites. They have the capacity to plan.
We normally plan our day and the way that we spend our time gathering information, gathering evidence from models of the world.
in a way that can only be described as looking as if it is curious. That's why I was so fixated on the art and creativity and curiosity that Joscha was talking about previously. I think that is probably a prerequisite for being conscious in the sense that Joscha means.
May I ask you a clarifying question, Carl, about belief updating? If consciousness is associated with belief updating, then let's say one is a computer, a classical computer: you get updated in discrete steps, whereas the belief updating that I imagine you're referring to is something more fuzzy or continuous. So does that mean that the consciousness associated with the computer, if a computer could be conscious, is of a different sort? How does that work?
I'm not sure. In the same spirit that we don't want to overcommit to neurons doing mind work, I don't think we need to commit to a continuous or a discrete space-time formulation. Again, that's an artificial divide between classical physics and quantum information-theoretic approaches.
I think the deep question is: what properties must the computational process in a PC or a computer possess before you would be licensed to make the inference that it was conscious, and possibly even ascribe self-consciousness to it? The way that I would articulate that is that you have to be able to describe everything that is observable about that computer as if, or explain it in terms of, it acting upon the world in a way that suggests, or can be explained by, it having a model of itself engaging with that world. And furthermore, I would say that that model has to involve the consequences of its action, which is what I meant by being an agent.
So it has to have a model, or act as if it has a model, a generative model, that could be a minimal-self kind of model, but crucially one that entails the consequences of its own actions, so that it can plan, so that it can evince curiosity-like behavior. So that could be done in silico; it could be done with a sort of clockwork.
I think it's more the nature of the implicit model under the hood
that is accounting for its internal machinations, but more practically, in terms of what I can observe about that computer: its behavior, and the way that it goes and gathers information, or attends to certain things and doesn't attend to other things. Okay, great. Joscha? If we think about where consciousness is, we might be biased by our propensity to assign identity
to everything. And identity does not apply to law-like things. Gravity is not somewhere. Gravity is a law, for instance, or combustion is not anywhere. It's a law. It doesn't mean that it happens everywhere in the same way. It only happens when the conditions for the manifestation of the law are implemented. When they're realized in a certain region, then we can observe combustion happening. But combustion simply means that under certain conditions, you will get an exothermic reaction.
And gravity means that under certain conditions, you will find that objects attract each other. And consciousness means that if you set up a system in a certain way, you will observe the following phenomena.
Consciousness in this way is a software state, a representational state, and software is not a thing. The word processor that runs on your computer doesn't have an identity that would make it separate from, or the same as, the word processor that runs on another person's computer, because it's a law. It says: if you put the transistors into this state, the following thing is going to happen. So a software engineer is discovering a law, a very, very specific law that is tailored to a particular task, and so on, but it is manifested whenever we create the preconditions for that law. And so software design is about creating the preconditions for the manifestation of a law, of text processing, for instance,
that allows you to implement such a function in the universe, or discover how it is implemented. But it's not that the software engineer builds it into existence and it didn't exist before; that's not the case. It always would work: if somebody discovered this bit string in a random way, and it's the same bit string implemented on the same architecture, it would still perform the same function. And in a sense, I think that consciousness is
not separate in different people. It's itself a mechanism, a principle that increases coherence in the mind. It's an operator that seems to be creating coherence; at least that's the way I would look at it or frame it. And as a result, it produces a sense of now, an island of coherence in the potential models that our mind could have. And I think it's responsible for the fact that we perceive ourselves as being inhabitants of an island of coherence in a chaotic world.
Now, this island of nowness is probably not the only solution to this. I think it's imaginable that there could be a hyper-consciousness that allows you to see multiple possibilities simultaneously, rather than just one as our consciousness does, or that offers us a now that is not three seconds long but hundreds of years long.
In principle, I think that is conceivable. So maybe we will have systems at some point, or we already have them, that have different consciousness-like structures that fulfill a similar role: islands of coherence, or intertidal regions in the space of representations, that allow you to act on the universe. But the way it seems to be implemented in myself is particularly in the brain, because if I disrupt my brain, my consciousness ceases,
whereas if I disrupt my body, it doesn't. This doesn't mean that there are no bi-directional feedback loops into my body, or even outside of my body, that are crucial for some functionality that I observe as content in my consciousness. But if you want to make me unconscious, you need to clobber my brain in some sense, nothing else. There's no other part of the universe that you can inhibit to make me unconscious. And that leads me to think that the way in which this law-like structure is implemented right now, for the system that is talking to you, is mostly on my neurons, on my brain. Okay. Any objections there, Carl? No, not at all. I was just trying to remember: if Mark Solms were here, he'd tell you exactly the size of a really small region in the brainstem. I think it's less than four cubic millimeters. If it were ablated, you would immediately lose consciousness, just like that. So, you know, it's a very, very specific part of your neural architecture.
There are probably bottlenecks that enable a large-scale functionality. In some sense, everything that would disrupt the formation of coherent patterns in my brain is sufficient to inhibit my consciousness, and there are probably many such bottlenecks that provide the vulnerability. So maybe the claustrum is crucial in providing some clock function that is needed for the formation of feedback loops in the brain that give rise to the kind of patterns that we need; maybe there are several other such bottlenecks.
This doesn't mean that the functionality is exclusively implemented in this bottleneck. No, I didn't mean to imply that the pineal gland is this thing. I didn't think that you would, but I thought it might lead to a misunderstanding in the audience, and I've heard famous neuroscientists point at such phenomena and say, oh, maybe this is where consciousness happens.
So which neuroscientist has said this then?
Just to unpack: the reason that Mark would identify this is that it contains exactly the cells of origin that broadcast everywhere and induce exactly the coherence you were talking about. These are the ascending modulatory neurotransmitter systems that are responsible for orchestrating that coherence. And I think that's very nice because, you know,
it also speaks to the ability of consciousness-mimicking artifacts, whose ability to mimic consciousness-like behavior rests upon this modulatory, attention-like mechanism. I'm thinking again of attention heads in transformers, which play the same kind of role as
the selection that these ascending neurotransmitter systems perform. So if you find yourself in conversation with Mark Solms, he would argue that the feeling of consciousness arises from equipping certain coherent, coordinated interactions, which may be regulated by the cerebellum or the claustrum, but it is that regulation that actually equips consciousness with the kind of qualitative feeling, in the way that Mark Solms describes.
But just notice, reviewing what Joscha just said: he's talking about consciousness equipping us with a sense of now, and having an explicit aspect that could be, you know, I'm thinking here not of Husserl, but: we're talking about processes in time. It's not that at this instant I am conscious, or consciousness is here. We're talking about a process that by definition has to unfold in time. I think that's an important observation which sometimes eludes, I think, people debating conscious states and conscious content, not acknowledging that it is a process.
I have an open question and maybe you have a reflection on this.
When we think about our own consciousness, we cannot know in principle, I think, just by introspection, whether we have multiple consciousnesses in our own mind, because we can only remember those conscious states that find their way into an integrated protocol that we can access from where we stand. We know that there are some people who have multiple personality disorder, in which the protocol itself gets splintered.
As a result, they don't dream themselves to be just one person; they dream themselves to be, alternately, different people who usually don't remember each other, because they don't have a shared protocol anymore. Now, my own emotion and perception are generated outside of my personal self. My personal self is downstream from them. I am subjected to my perception and emotion, and I have involuntary reactions to them. But to produce my percepts and my emotions, my mind needs intelligence. It cannot be much more stupid than me. If my emotions guided me in a way that was consistently more stupid than my reason and my reflection, I don't think I would work. So there is an interesting question: is there a secondary consciousness? Is the part of your mind that generates your world model and your self-assessment, your alignment to the world, itself conscious? So basically: do you share your brain with a second consciousness that has a separate protocol,
or is this a non-conscious process that is basically just dumb and doesn't know what it's doing, in the sense that it would not be sentient in a way that's similar to my own sentience? What do you think? You should have a go at that one, and then I can think about it. Well, something I had wondered about 10 years ago or so, and I don't recall the exact argument, was: if it were the case that the graph
in our brain, let's just reduce the neurons down to a graph, somehow produces consciousness, or is the same as consciousness, then if you were to remove one of those nodes, you would still have somewhat the same identity. Okay, so then does that mean that we have a practically infinite number of overlapping consciousnesses within our brain? I don't recall the exact argument, but it was similar to this. And then there's something related in philosophy called the binding problem. I'm uncertain what people who
Can I just then pursue that notion of binding in the context of the kind of thing I am, or the way I am, at the moment? I think that's a very compelling notion.
From the point of view of generative modeling, so I'm not answering now as a philosopher, but as somebody who may be tasked, for example, with building an artifact that would have a minimal kind of selfhood, the first thing you have to write down
is different states of mind so that I can be frightened, I can be embarrassed, I can be angry, I can be a father, I could be a football player. So all the different ways that I could be that are now conditioned upon the context in which I find myself.
If that's part of the generative model, that then speaks to two things. First of all, you have to recognize what state of mind you are in, given all the evidence at hand. For example, I might want to jointly explain the racing heart that my interoceptive cues are providing me in the interoceptive domain with the stiffness of my muscles that my proprioception is equipping me with, and to reconcile that with my visual exteroceptive input that I'm in a dark alley and, mnemonically, I've never been here before. All of this sensory evidence might be quite easily explained by the simple hypothesis: I am frightened. And that in turn generates covert or mental actions, and possibly even overt autonomic and motor actions, that provide more evidence for the fact that I am frightened, in a William James sense: I have cardiac acceleration, I will have a motor response, a muscular response appropriate for a fright-and-flight response. So, just to be able to generate and recognize
emotional kinds of behavior. I would need to have a minimal kind of model that crucially obliged me now to disambiguate between a series of different ways of being.
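The inference Friston sketches here, recognizing the state of mind that jointly explains several sensory channels, can be illustrated with a toy naive-Bayes model. The states, cues, and probabilities below are invented for illustration and are not taken from the conversation:

```python
def infer_state_of_mind(priors, likelihoods, observations):
    """Posterior over discrete 'states of mind' given independent sensory cues.

    priors:       dict mapping state -> prior probability
    likelihoods:  dict mapping state -> {cue: P(cue | state)}
    observations: list of observed cues

    Combines the cues naive-Bayes style (assuming conditional independence)
    and normalizes to return a proper posterior distribution.
    """
    posterior = {}
    for state, prior in priors.items():
        p = prior
        for cue in observations:
            p *= likelihoods[state].get(cue, 1e-9)  # tiny floor for unseen cues
        posterior[state] = p
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}


# Racing heart + stiff muscles + dark alley are individually ambiguous,
# but jointly they make "frightened" by far the best explanation.
posterior = infer_state_of_mind(
    priors={"frightened": 0.1, "calm": 0.9},
    likelihoods={
        "frightened": {"racing_heart": 0.9, "stiff_muscles": 0.8, "dark_alley": 0.7},
        "calm": {"racing_heart": 0.05, "stiff_muscles": 0.1, "dark_alley": 0.2},
    },
    observations=["racing_heart", "stiff_muscles", "dark_alley"],
)
```

Despite the low prior, the combined evidence pushes the posterior for "frightened" above 0.95, which mirrors the point that a single hypothesis about one's state of mind can parsimoniously explain many streams of sensory evidence at once.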
You know, so it's not so much, oh, I am me. That's a great hypothesis. That explains everything. But to make it operationally important, I have to actually infer I'm me in this kind of state of mind, this situation, or I'm me in this kind of situation and select the right state of mind to be in. And I think that that really does speak to this sort of notion of multiple consciousnesses that
I'm constantly seeking the way in which I complete you in terms of dyadic interactions, which means I have to recognize: what kind of person do you expect me to be in this setting?
And of course I can only do that if I actually have an internal model that is about me, a model that actually has this attribute of selfhood, but specifically selfhood appropriate to this context or this person or this situation. Does that make sense?
Yeah, I have a question about that. You said that you have different identities that you then select from, to see which one is most appropriate for the circumstance, like a hypothesis. Is it the case, then, that you would say there are multiple consciousnesses inside your brain? Or is it more like you have multiple potential consciousnesses, and as soon as you select one, that makes it actual?
I don't know, though. I would imagine that you'd have to have another, deeper layer of your generative model that then recognizes the selection process. And, indeed, this may sound fanciful, but there are models of consciousness, naturalized in terms of inference schemes, that actually do invoke this. I'm thinking here of the work of people like Lars Sandved-Smith, who explicitly has three levels, each level a deep generative model, very much like a deep neural network, and the role of each level is to provide the right attention heads or biasing or precision.
So it may well be that to get the kind of self-awareness, if I now read awareness as deploying mental action in the service of setting the precision or the gating of various communications or processing lower down in the model, it may well be that you do need another layer of sophistication or depth to your generative models that I suspect trees don't have
But certainly you have, or I can infer that you have, given I'm assuming that I have, a similar conception of consciousness. But I'm not sure that really speaks to your question, or the one that Joscha was posing, about the unitary aspect of consciousness, and whether that transcends an inference that would simply be biophysically instantiated, in exactly the same way that I can register visual motion in the motion-sensitive area V5 in my posterior cortex. I don't know about that. I'll pass back to Joscha on that one. Again, we need a very narrow definition, a very tight definition of consciousness, to answer this question in a meaningful way.
If we see consciousness as something that we vaguely gesture at, and there could be multiple things in our understanding, then it becomes almost impossible to say something meaningful about this. So, for instance, it is conceivable that consciousness is implemented by a small set of circuits in the brain, and that all the different contents that can experience themselves as conscious are repurposing this shared functionality.
We probably have only one language center, and this one language center can be used to articulate ideas by many parts of our mind, using different sub-agents that basically interface with it. You can also clearly have multiple selves interacting in your mind. Your personal self is one possible self that you can have, one that represents you as a person.
But there are some people who have God talking to them in their own mind. And I think what happens there is that people implement a self that is self-identifying as existing across minds, something that is not a model of the interests of the individual person, but a model of a collective agent that is implemented using the actions of the individual people. But of course, this collective mind that assumes the voice of God, and talks to you in your own mind so you can perceive it, is still implemented on your own mind and uses your circuitry. It's just that your circuitry is not yours. Your brain doesn't belong to yourself. Yourself is a creation of your own mind that symbolizes this person. People who say that God doesn't exist often forget that they themselves don't really exist in physics.
This thing that I experience as perceiving, as interacting with the world, is a dream. It's a dream of what it would be like if you were a person that existed. It's virtual, right? So you can also dream being a God, and this God might be so tightly implemented on your mind that it's able to use your language center, and you hear its voice talking to you. But it's not more or less real than hearing your own voice talking to you in your mind, right? It's just an implementation of a representation of agency in your mind.
One crucial difference between the way in which most AI systems are implemented right now and the way in which agency is implemented on our minds is that in AI we usually write functions that perform something like a hundred steps in a neural network, for instance, and then give a result that makes the programmer happy. And this is it. Whereas the time-series predictions in our own mind are dynamic. They're not meant to solve a particular function, but they're meant to track reality. So in a sense, our brain is more like a very complex resonator that tries to go into resonance with the world. It creates a harmonic pattern that continuously tracks your sensory data with the minimal amount of effort. And this perspective is very different. It really means your perception of the world cannot afford to deviate too much in its dynamics from the dynamics that you observe in your sensory apparatus, because otherwise future predictions become harder. You get out of sync.
You always try to stay in sync with the world. And this staying in sync is really crucial for the way in which we experience ourselves and the world. As part of staying in sync, we discover our own self as the missing link between volition and the outcomes of our actions. Our body would not be discoverable to us, and is not immediately given to us, if we didn't have this loop that ties us into the outer universe and into the stuff that we cannot control directly.
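Joscha's "resonator" picture, an internal state that is continually nudged back into sync with the sensorium by its own prediction error, can be hedged into a toy sketch. Everything here (the function name, the gain constant, the drifting signal) is an illustrative assumption of mine, not anything the speakers specify:

```python
import math

def track_signal(observations, gain=0.5):
    """Toy 'resonator': maintain an internal estimate that is nudged
    toward each incoming observation by a fraction of the prediction
    error, so the estimate stays in sync with a drifting world."""
    estimate = 0.0
    errors = []
    for obs in observations:
        error = obs - estimate      # prediction error against the sensorium
        estimate += gain * error    # minimal-effort correction toward sync
        errors.append(abs(error))
    return estimate, errors

# A slowly drifting 'world'; the internal state locks onto it, and the
# prediction error shrinks from its initial value as sync is achieved.
world = [1.0 + 0.5 * math.sin(t / 10) for t in range(100)]
final, errors = track_signal(world)
```

The point of the sketch is only the loop structure: the estimate is never computed once and returned, it is perpetually corrected, and "getting out of sync" shows up directly as growing prediction error.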
And for me, this question relates to, do we have only one consciousness? It occurs to me that we would not know if we have multiple ones, if they don't share memories. If I were to set up an AI architecture, where a part of the AI architecture is a model of an agent in the world. Another part of the AI architecture is a model of the infrastructure that I need to maintain to make a model of the world and such an agent in the world.
I would not tell the agent how this infrastructure works, because the agent might use that knowledge to game the architecture and get a better outcome for itself, not the organism. Imagine you could game your perception so you're always happy, no matter how much you're failing in the world. From the perspective of the larger architecture, that's not desirable. So it would probably remain hidden from you how you implement it. And to me, the question is interesting. How sentient is this part of you that is not yourself?
Does it actually know what it is in real time? I think it's a very interesting and tempting philosophical question, and also a practical one. Maybe there is a neuroscientific experiment that would figure out if you have two clusters of conscious experience. I wouldn't know how to measure this, but maybe IIT, Global Workspace Theory, and so on are more interesting approaches than we currently think they are.
Because they assume that there is just one consciousness. Of course, from the perspective of one consciousness, there is only one, because consciousness is in some sense, by definition, what's unified. But if there are multiple clusters of unification that exist simultaneously, they wouldn't know each other directly. They could maybe observe each other, but maybe not in both directions.
Sorry, when you say consciousness is by definition one, is that akin to how you'd say software as such is one thing, but there are specific instantiations of it? It's basically more like saying the universe is by definition only one. You can have multiple universes, but that means we define "universe" in a particular way. Normally, "universe" is used to mean everything that feeds information back into a unified thing. We accept that parts of the universe get lost if they go beyond the distance from which they can feed information back to you. But they are still, in the way in which we think about the universe, part of the universe. The universe is everything that exists. And consciousness, in this sense, is everything that you can be conscious of.
Right. So if there is stuff in you that you're not conscious of, it doesn't mean that it's not conscious. It would just be a separate consciousness, possibly. It could also be that it's not a consciousness. And so what I don't know is: is the brain structured in such a way that it can maintain only one consciousness at a time? Or could there be multiple full-on consciousnesses where we just don't know about the other one? I perceive my consciousness as being focused on this content that is my personal self.
I can have conscious states in which I'm not a personal self. For instance, I can dream at night that there's stuff happening, and I'm conscious of that stuff happening, but there is no I. There is no personal self. There's just this reflexive attention that is interacting with the perceptual world. In that state, I would say I can clearly show that consciousness can exist without a personal self, and the personal self is just a content. But it doesn't answer the question, are there multiple consciousnesses?
interacting in my brain: one that maintains my reward system and motivational system and my perception, and one that maintains my personal self. Karl, now that we've spoken about the unity of consciousness, dissociation, as well as even voices of God, and God him-, her-, or itself: what does your background in schizophrenia, your perspective from there, have to say?
Yeah, well, that's a brilliant question, and a leading question. It's what I wanted to comment on. So again, so many things have been unearthed here, from the basic idea that all our beliefs, our fantasies, are hypotheses, illusions that are entrained by the sensorium in a way that maintains some kind of synchrony between the inside and the outside. I think that's quite a fundamental thing which we haven't spoken about very much, but I just want to fully endorse it. And of course that entrainment is sometimes referred to, I think, as entrained hallucination: perception being hallucination that's just been entrained by sparse data,
but the data themselves being actively sampled. So this loop that Joscha was referring to, I think, is an absolutely crucial aspect of the whole sense-making, and indeed sense-making as a self, as a cause of my own sensations or the author of my own sensations, in an active sensing or active inference sense. I think that's absolutely crucial. On the question about whether there are multiple consciousnesses, I should just
say, before addressing the psychiatric perspective, that I have a group of colleagues including people like Maxwell Ramstead and Chris Fields, and particularly Chris Fields, who takes the quantum information-theoretic view of this and brings to the table the notion of an irreducible Markov blanket in a computing graph, one that crucially has some unique properties which mean that
it can only know of itself by acting on the outside, or on other parts of the brain. And again, acting in this instance just means setting the attention or the coordination, or contextualizing message passing elsewhere. But the interesting notion,
which is not unrelated to the pineal gland, or to Mark Solms' ascending neurotransmitter systems that might do this kind of action, is that there could be more than one minimal or irreducible Markov blanket, one that you could practically and experimentally define, in principle, by looking at connectivity of any kind. And certainly, if you have a sufficiently detailed connectome, you could actually identify candidates for irreducible Markov blankets: the thing that looks at the thing that's doing the thing, which may have different kinds of experiences. There could be an irreducible Markov blanket
in, say, the globus pallidus, that might be making sense of and acting upon the machinery that underwrites our motor behavior and our plans and our choices, as opposed to something in the occipital lobe that might be more to do with perception. So I'm just saying that I don't think it's a silly question to ask whether we can empirically identify
candidates in computing architectures that would have the right kind of attributes necessary to ascribe to them some minimal kind of consciousness. But let me return to this key question about schizophrenia, because as Joscha was talking, it did strike me: yes, that's exactly what goes wrong in schizophrenia. Attribution of agency.
Delusions of control, hearing voices. Again, coming back to this notion that, you know, this action-perception loop, this circular coupling to the world, rests upon action that has an agent, and consciousness understood as self-modeling is all about
ascribing the right agency to the outcomes of action. That is, I think, a really important notion here, and it can go horribly wrong. We spend the first years of our lives working out how I cause that and you cause that, working out what I can cause and what I can't cause, what Mum causes and what other people cause. Imagine that you lost that capacity. Imagine that when you spoke (and this is Chris Frith's notion, or expression, for auditory hallucinations, for example), you weren't able to recognize that it was you that initiated that speech act, whether it's actually articulated or subvocal. So just not being able to infer
selfhood, in the sense of ascribing agency to the sensed consequences of action, would be quite devastating. And of course you can think about reproducing these kinds of states with certain psychotomimetic or psychedelic drugs. They really dissolve what we take for granted in terms of a coherent, unitary content of consciousness. If you've ever had synesthesia,
the fact that color is seen and sound is heard doesn't have to be like that. You know, it's just that, for us as sustained, inferring, self-evidencing computing processes that sustain our sense-making in a coherent way, it looks as if colors are seen and sounds are heard. That's how we make sense. It doesn't have to be like that. And you can experience the converse. You can start to see sounds. You can hear colors. A moment can actually feel as if it's nested.
Given the right psychopathology or pathophysiology, technically a synaptopathy of the kind you might associate with things like Parkinson's disease and schizophrenia, possibly even neurotic disorders, affective or depressive or generalized anxiety disorders, can all be understood as basically a disintegration of this coherent
synthesis and, to use your word, the binding. Which means, I think, that the same principles could also be ascribed to consciousness itself. So depersonalization and derealization are two conditions which I've never experienced, but my understanding is this: there are depersonalization syndromes where you still sense, you still perceive, but it's not you. And there are derealization syndromes where you are there, but all of your sensorium is unreal. It's not actually there anymore. You're not actually in the world. So you can get these horrible disintegrative, dissociative (well, dissociative is a political term) situations where everything we take for granted about the unitary aspect of our experienced world breaks down.
I would distinguish between consciousness and self more closely than you seem to be doing just now. I would say that consciousness coincides with the ability to dream or it is the ability to dream even. And in schizophrenia, the dream is spinning off from
the tightly coupled model that allows you to track reality. And when we dream at night, we are dissociated from our sensorium and the brain is probably also dissociated in many other ways. And as a result, we get split off from the ability, for instance, to remember who we are, which city we live in, what our name is very often in a dream. Even if it's a lucid dream where we get some agency over the contents of our dream, we might not be able to reconstruct our normal personality and crucial aspects of our own self.
And in schizophrenia, I think this happens while we are awake, which means we start to produce mental representations that look real to us, but that no longer have the property that they predict what's going to happen next in the world, or much later. And this loss of predictive power doesn't mean that they are now more of an illusion than before. The normal stuff that has predictive power is still hallucination. It's still a trance state when you perceive something as real, as long as you perceive it as real. It's only that some trance states are useful, in the sense that they have predictive power, that they're useful representations, and others are not.
And the ability to wake up from this notion that your representations are real is what Michael Taft calls enlightenment. He's a meditation teacher who has a pretty rational approach to enlightenment. And basically to him, enlightenment is the state where you recognize all your mental representations as representations and become aware of their representational nature. You basically realize that nothing that you can perceive is real, because everything that you can perceive is a representational content.
And it's something that is accessible via introspection, if you build the necessary models for doing that. So when your mind gets to this model level where you can construct a representation of how you're representing things, then you get some agency over how you are interacting with your representations. But I wouldn't say that somebody who is experiencing
a schizophrenic episode, or who derealizes or depersonalizes, is losing consciousness. They are losing the self. They're losing coherence. They're losing the ability to track reality, and the interaction between self and external world, and so on. But as long as they experience that happening, they're still conscious.
Does this make sense?
We've met only a few times since I left Germany, mostly online. And I like Thomas a lot. I think that he is one of the few German philosophers worth reading right now. But of course, he's limited by being a philosopher, which means he stops before the point where he would make actually functional models that we could test.
Right. So I think his concepts are sound. He does observe a lot of interesting things, and I guess a lot of it also through introspection. But I think in order to understand consciousness, we actually need to build testable theories. And I suspect that even if you cannot construct consciousness, this strange loop as Hofstadter calls it, from scratch, which I don't know whether we can do,
I was going to make the joke that we've offended physicists, neuroscientists and philosophers. It's mostly retaliation because I'm so offended by them. Maybe I shouldn't.
I tried to study all these things and I got so little out of it. I found that most of it is just pretence. There's so little honest thinking going on about the condition that we are in. It was very frustrating to me. What field do you identify as being a part of, Joscha? Computer scientist? I like computer science most because I've discovered as a student that you can publish in computer science at every stage of your career.
You can be a first semester student and you can publish in computer science because the criteria of validity are not human criteria. The stuff either works or it doesn't. Your proof either pans out or it doesn't. Whereas the criteria in philosophy are to a much larger degree social criteria. So the more your peers influence the outcome of the review and the more your peers can deviate from the actual mission of your subject in the social dynamics, the more haphazard your field becomes.
And so we noticed, for instance, in psychology, we had this big replication crisis, and the replication crisis in psychology was something that had been anticipated by a number of psychologists for many, many years, who pointed out this curious fact that psychology seems to be the only science where you make a prediction at the beginning of your paper and it always comes true in the end. Enormous predictive power. They also pointed at all the ways in which p-hacking was accepted and legal, and at how poorly the statistical tools were understood.
And then we have this replication crisis, and 15,000 studies are more or less invalidated, or no longer reliable. And somebody pointed this out in a beautiful text where they said: essentially, what's happening here is that there's an airplane crash, and you hear that 15,000 of your loved ones have died. And nobody even goes to the trouble to ID them, because nobody cares, because nothing is changing as a result of these invalidated studies, right? It's as if a building has just toppled and nobody cares. There's not actually a building. There's just people talking. And when this happens, we have to be brutally honest, I think, as a field. Also, I hear very often that AI has been inspired by neuroscience and learned so much from it. But when I look at the actual algorithms, the last big influence was Hebbian learning.
And the other stuff is just people talking, taking inspiration, taking insights, and so on. It's not the case that there is a lot of stuff that you can take out of the formalisms of people who studied the brain and directly translate. I think that even what Karl is doing is much more a result of information theory, and of physics that is congruent with information theory, because it's thinking about similar phenomena using similar mathematical tools, and then expressing it with more Greek letters than computer scientists are used to.
But there is a big overlap in this. And so I think the separation between intellectual traditions and fields and disciplines is something that we should probably overcome. We should also, in an age of AI, probably rethink the way in which we publish and think, right? Is the paper actually the contribution that we want to make, in a future time when you can ask your LLM to generate the paper? Maybe it's the building block, the knowledge item, the argument, that is going to be the major contribution that the scientist or the team makes, together with the experiment. And then you have systems that automatically synthesize this into answers to the questions that you have when you want to do something in a particular kind of context. But this will completely change the way in which we evaluate value in the scientific institutions of the moment. And nobody knows what this is going to look like.
Imagine we use an LLM to read a scientific paper, and we parse out all the sources of the scientific paper and what the sources are meant to argue for. And then we automatically read all the sources and check whether they actually say what the paper claims they say. And we parse through the entire citation trees of the discipline in this way, until we get to first principles. What are we going to find? Which parts of science will hold up?
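The pipeline Joscha imagines can be sketched as a recursive walk over a citation graph. Here the function names, the toy corpus, and the naive string check standing in for an LLM judgment are all my own illustrative assumptions, not anything the speakers describe:

```python
def verify_claims(paper, library, check):
    """Walk a paper's citation tree and ask check(claim, source_text),
    which would be an LLM call in practice, whether each cited source
    actually supports the claim the citing paper attributes to it."""
    results = {}

    def walk(name, seen):
        for cited, claim in library[name].get("cites", {}).items():
            if (name, cited) in results:
                continue
            results[(name, cited)] = check(claim, library[cited]["text"])
            if cited not in seen:          # recurse toward first principles
                walk(cited, seen | {cited})

    walk(paper, {paper})
    return results

# Tiny fabricated corpus for illustration.
library = {
    "A": {"text": "survey", "cites": {"B": "water boils at 100 C",
                                      "C": "p-hacking is widespread"}},
    "B": {"text": "water boils at 100 C at sea level", "cites": {}},
    "C": {"text": "replication rates in psychology are low", "cites": {}},
}

# Naive stand-in for the LLM: does the source mention the claim's first word?
naive_check = lambda claim, text: claim.split()[0] in text
report = verify_claims("A", library, naive_check)
# report[("A", "B")] is True; report[("A", "C")] is False
```

The interesting output is exactly the set of edges flagged False: claims whose cited source, on inspection, does not say what the citing paper asserts it says.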
I think that we might be at the doorstep of a choice between a scientific revolution, in which science becomes radically honest and changes the way it works, or one in which it reveals itself as an employment program, as fake jobs for people who couldn't find a job in the real economy and basically get away with it because their peers let them get away with it.
And I tried to be as pointed as possible, and as bleak as possible. Science, given the incentives it's working under and the institutional rot that has set in after decades of postmodernism, produces surprisingly good stuff. There are so many good scientists in all the fields that I know. But I also notice that many of the disciplines don't seem to be making a lot of progress on the questions that we have. And many fields seem to be stuck.
This doesn't seem to be just because all the low-hanging fruit has been picked; I think it's also because the way in which scientific institutions work has changed. The notion of peer review probably didn't exist very much before the 1970s. This idea that you get truth by looking at a peer-reviewed study, rather than by asking a person who is able to read and write such studies, is new. It's something that didn't exist for Einstein.
So I don't know if this means that Einstein was an unscientific mind who was only successful because he was working at the beginning of a discipline, or whether it was because he was thinking in a completely different paradigm. But no matter what, I think that AI has the potential to change the paradigm massively. And I don't know which way, but I can't wait. So now that we're talking about computer scientists,
what do you make of the debacle at OpenAI? Both Karl and Joscha; directed to you, Joscha, first. There's relatively little I can say, because I don't actually know what the reason was for the board's decision to fire the CEO. Firing the CEO is one of the very few moves, beyond providing advice, that the board can make.
I thought that if the board makes such a decision, in a company in which many of the core employees have been hired by the CEO and have been working very closely and happily with the CEO, they would need to have a very solid case. And there would need to be a lot of deliberation among core engineers and players in the company before such a decision is made. Apparently that was not the case. I have difficulty understanding why people behaved in the way in which they did.
The outcome is that OpenAI is more unified than ever. There's basically 95% agreement among employees that they are going to leave the company if it doesn't reinstate the CEO. That's almost unheard of. This is like an Eastern European communist dictatorship with its fake elections, except it was not fake. It was basically people getting together overnight and collecting signatures
for a decision that gravely impacts their professional careers. Many of them are on visas that depend on continuous employment with the company, so they incurred actual risks for a time. And I also suspect that a lot of the discussions that happened were bluffs, right? When the board said, yes, they want to reinstate him, but then waffled, and then came out with Emmett Shear, who is a pretty good person, but it's not clear why the Twitch CEO would suddenly be the right person to lead OpenAI.
So I don't even know whether the decision was made because there were personal disagreements about communication styles, or whether it was about the direction of the company, where members of the board felt that AI is being developed too quickly and should be slowed down significantly, and that the strategy of Sam Altman to run ChatGPT at a loss,
making up for this by speeding up development and getting more capital in, thereby basically creating an AGI-or-bust strategy for the company, might not be the right strategy. Also, the board members don't hold equity in the company. So this is a situation where the outcome of their decision is somewhat divorced from their own material incentives, and more aligned with the political ideals or goals that they might have. And again,
not all of them are hardcore AI researchers. Some of them are. I don't really know what the particular discussions in there have been. And of course, I have more intimate speculations from some discussions with people at OpenAI, but I cannot disclose those speculations. So at the moment, I can only summarize what's publicly known and what you can read on Twitter. It's super exciting. It has kept us all awake for a few days. It's a fascinating drama.
And I'm somewhat frustrated by people saying, oh my God, this has destroyed trust in OpenAI if decisions can be so erratic, as if OpenAI should be a bureaucracy that doesn't move for a hundred years. No, this is part of something that is super dynamic and changing all the time. I think what the board should have seen is that the best possible outcome it could have achieved (best possible in the sense of the board trying to fire Sam Altman to change the course of the company) is that OpenAI would split. They would have created one of the largest competitors to OpenAI, basically an anti-Anthropic on the other side of OpenAI, focusing more on accelerating AI research. It would have been clear that many of the core team members would join it, and it would have destroyed a lot of the equity that OpenAI currently possesses. And it would have taken away a large portion of OpenAI's largest customer, Microsoft.
These are some observations. Sam is back now. Yes. It was clear that it would happen. This move by Satya Nadella, saying that Sam now works for Microsoft, did not happen after a month of negotiating a new organization. It happened in an afternoon, after it was announced that the board had another candidate whom they had secretly talked into taking on this role.
Microsoft basically set this up as a threat: okay, they're all going to come to us, and every OpenAI person who wants to can now join Microsoft in a dedicated, autonomous unit, with details yet to be announced, but they're not going to be materially worse off or research-wise worse off. So this is a backstop that Microsoft had to implement to prevent its stock from tumbling
on Monday morning. So Microsoft moved very fast on Sunday and decided: we are going to make sure that we don't end up in a situation that is worse for us than before. And this created enormous pressure on OpenAI to basically decide: either we are going to be alone, without most of the core employees and without our business model, but having succeeded in what the board wants,
And Karl, what do you make of it, the whole fiasco?
I was listening with fascination. I think you have more than enough material to keep your viewers engaged. Is OpenAI going to be ingested by Microsoft or not then? Do you think OpenAI is going to survive by itself?
Some people are joking that OpenAI's goal is to make Google obsolete, to replace search with intelligence, and that Google is too slow to deliver a product to deal with this impending competition. OpenAI has been growing rapidly in the last few months and has hired a lot of people who focus on product and customer relationships. The core research team has been growing much more conservatively. And I think that
Microsoft was a natural partner for OpenAI in this regard, because Microsoft is able to make large investments and yet is possibly not as agile as Google. The risk that, if OpenAI partnered with Google as its main customer, Google would at some point just walk away with the core technology and some of the core researchers, might be larger than with Microsoft. But we can only speculate there. So the last question for this podcast is: how is it that you all prevent
an existential crisis from occurring, with all this talk of the self as an illusion, of our beliefs which are so associated with our conception of ourselves, of mutable identities, and of competing, contradictory theories of a terrifying reality being entertained? Well, Karl.
I'm just trying to get underneath the question.
The kind of illusions I think we're talking about are the stuff of the lived world and the experienced world, and they are not weak or facile facsimiles of reality. These are the fantastic objects, belief structures, that constitute reality. So literally, as I'm sure we've said before, the brain, as a purveyor of these fantasies, these illusions, is fantastic, literally, because it has the capacity to entertain these fantasies.
So I don't think there should be any worry about somehow not being accountable to reality. These are fantastic objects that we have created, co-created, you could argue given some of our conversations, that constitute our reality.
I think that existential crisis is a good thing. It basically means that you are getting at a point where you have a transition in front of you, where you basically realize that the current model is not working anymore and you need a new one. And the crisis, usually an existential crisis doesn't necessarily result in death. It typically results in transformation into something that is more sustainable because it understands itself and its relationship to reality better.
The fact that we have existential questions and that we want to have answers for them is a good thing. When I was young, I thought I don't want to understand how music actually works because it would remove the magic. But the more I understood how music works, the more appreciative I became of deeper levels of magic. And I think the same is true for our own minds. It's not like when we understand how it works that it loses its magic. It just removes the stupidity of superstition and gives us something that
Thank you, Joscha. Thank you, Karl. There's a litany of points for myself, for the audience, for all of us to chew on.
Over the course of the next few days, maybe even weeks. Thank you. Thank you, Kurt, for bringing us together. Karl, I really enjoyed this conversation with you. It was brilliant. I like that you think on your feet and that we have this very deep interaction. I found it interesting that we agree on almost everything. We might sometimes use different terminology, but we seem to be looking at the same thing from pretty much the same perspective.
And I also really enjoyed it; it was a very, very engaging conversation. And I love the way that you're not frightened to upset people and tell them things, even when they are looking for a job in academia. Good. I still don't have your balls. Well done. Have a wonderful rest of the day. Thank you. All right, take care. Thanks very much.
By the way, if you would like me to expand on this thesis of multiple overlapping consciousnesses that I had from a few years ago, let me know and I can look through my old notes.
Alright, that's a heavy note to end on. You should know, Joscha has been on this podcast several times: once solo, another with Ben Goertzel, another with John Vervaeke, another with Michael Levin, and one more with Donald Hoffman. Karl Friston has also been on several times: twice solo, another with Michael Levin, and another with Anna Lembke. That one's coming up shortly.
The links to every podcast mentioned will be in the description, as well as the links to any of the articles or books mentioned; as usual, in every single Theories of Everything podcast, they're in the description. We take meticulous timestamps and we take meticulous notes. If you'd like to donate, because this channel has had a difficult time monetizing with sponsors, and sponsors are the main bread and butter of YouTube channels, then there are three options. There's Patreon, which is a monthly subscription. It's patreon.com slash Kurt Jaimungal. Again, links are in the description.
There's also PayPal for one-time sums. If you like, it's also a place where you can donate monthly; there's a custom way of doing so. And the amount that goes to the creator, aka me in this case, is greater on PayPal than on Patreon, because PayPal takes less of a cut. There's also cryptocurrency, if you're more familiar with that, and the links to all of these are in the description. I'll say them aloud in case you're away from the screen. It's tinyurl.com slash lowercase. All of this is lowercase.
PAYPAL
Thank you. Thank you for your support. It helps TOE continue to run. It helps pay for the editor who's doing this right now. My wife and I are extremely grateful for your support. We wouldn't be able to do this without you. Thank you.
The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so, as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and build as a community our own TOEs. Links to both are in the description.
Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well.
Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in Theories of Everything and you'll find it. Often I gain from re-watching lectures and podcasts, and I read that in the comments: hey, TOE listeners also gain from replaying. So how about instead re-listening on those platforms?
iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. If you'd like to support more conversations like this, then do consider visiting patreon.com slash Kurt Jaimungal and donating whatever you like. Again, it's support from the sponsors and you that allows me to work on TOE full time. You get early access to ad-free audio episodes there as well. For instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough.
▶ View Full JSON Data (Word-Level Timestamps)
{
"source": "transcribe.metaboat.io",
"workspace_id": "AXs1igz",
"job_seq": 6841,
"audio_duration_seconds": 9694.53,
"completed_at": "2025-12-01T00:29:02Z",
"segments": [
{
"end_time": 26.203,
"index": 0,
"start_time": 0.009,
"text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region."
},
{
"end_time": 53.234,
"index": 1,
"start_time": 26.203,
"text": " I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines."
},
{
"end_time": 64.531,
"index": 2,
"start_time": 53.558,
"text": " As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
},
{
"end_time": 85.162,
"index": 3,
"start_time": 66.22,
"text": " There could be multiple consciousnesses. Of course, one will not be aware of the other and possibly not even able to infer the agency even if it was. We do not become conscious after the PhD. We become conscious before we can drag a finger. So I suspect that consciousness allows the self-organization of information processing systems in nature."
},
{
"end_time": 110.896,
"index": 4,
"start_time": 86.852,
"text": " Joscha Bach and Karl Friston, today's Theolocution guests, are known for their work in artificial intelligence, neuroscience, and philosophical inquiry. Bach, an AI researcher, delves into cognitive architectures and computational models of consciousness and psychology. Friston, a neuroscientist, is lauded for his development of the Free Energy Principle, a theory explaining how biological systems maintain order."
},
{
"end_time": 128.319,
"index": 5,
"start_time": 111.323,
"text": " This framework of neural processes is rooted in thermodynamics and statistical mechanics. Joscha has been on this podcast several times, once solo, another with Ben Goertzel, another with John Vervaeke, another with Michael Levin, and one more with Donald Hoffman. Whereas Karl Friston has also been on several times,"
},
{
"end_time": 153.439,
"index": 6,
"start_time": 128.319,
"text": " The first hour of today's talk is broadly on agreements, so that we can establish some terms. The second hour roughly is on points of divergence, and the third is on the darker philosophical implications, as well as how to avoid existential turmoil precipitated by earnestly contending with these heavy ideas."
},
{
"end_time": 170.93,
"index": 7,
"start_time": 153.439,
"text": " For those of you who don't know, my name is Kurt Jaimungal, and there's this podcast here called Theories of Everything, where we investigate theories of everything from a physics perspective primarily, as my background's in math and physics, but I'm also interested in the larger questions on reality, such as: what is consciousness? What constitutes it?"
},
{
"end_time": 199.104,
"index": 8,
"start_time": 170.93,
"text": " What gives rise to consciousness? What counts as an explanation? You could think of this channel as exploring those questions with you while I sit and ponder at nighttime and daytime incessantly. Enjoy this Theolocution with Joscha Bach and Karl Friston. All right. Thank you all for coming on to the Theories of Everything podcast. What's new since the last time we spoke? Joscha, the last time was with Ben Goertzel, and Karl, the last time was at the Active Inference Institute. So, Joscha, please."
},
{
"end_time": 229.002,
"index": 9,
"start_time": 200.196,
"text": " There's so much happening in artificial intelligence. We have more on a weekend than a normal TV show has in seven seasons. So it's hard to say what's new. For me personally, I've joined a small company that is exploring an alternative to the perceptron. I think that the way in which current neural networks work is very unlike our brain. But I don't think that we have to imitate the brain. We have to figure out what kind of mathematics the brain is approximating."
},
{
"end_time": 257.568,
"index": 10,
"start_time": 229.684,
"text": " Very similar actually. I guess what's new in the larger scheme of things of course is the advent of large language models and all the machinations that surround that and the focus that that has caused in terms of what do we require of intelligent systems, what do we require"
},
{
"end_time": 287.517,
"index": 11,
"start_time": 257.739,
"text": " of artificial intelligence, what's the next move to generalized artificial intelligence and the like. So that's been certainly a focus of discussions both in academia and in industry in terms of positioning ourselves for the next move and the implications it has both in terms of understanding the mechanics of belief updating and the move from the age of information to the age of intelligence but"
},
{
"end_time": 310.93,
"index": 12,
"start_time": 288.131,
"text": " Right. I read your paper, Mortal Computation: A Foundation for Biomimetic Intelligence."
},
{
"end_time": 330.23,
"index": 13,
"start_time": 311.288,
"text": " Well, we can get right into that, Karl. On page 15, you define what a mortal computation is as it relates to Markovian blankets. Can you please recount that? And further, you quote Kierkegaard, who says that life can only be understood backwards but must be lived forwards. So how is that connected to this? Right."
},
{
"end_time": 353.046,
"index": 14,
"start_time": 330.759,
"text": " You are a voracious reader. That was only a few days ago. I do my research, man. And also, I did not write that. That was my friend, Alexander. You can take the credit. We'll remove this part. I can't take the credit because I don't know about any of the philosophy, but I thought it was largely"
},
{
"end_time": 373.78,
"index": 15,
"start_time": 353.353,
"text": " his ideas, but they resonate certainly, again, with this sort of notion of a commitment to a biomimetic understanding of intelligence, and that particular paper sort of revisits the notion of mortal computation in terms of what does it mean to be a mortal computer and"
},
{
"end_time": 391.032,
"index": 16,
"start_time": 374.326,
"text": " The importance of the physical instantiation, the substrate on which the processing is implemented as being part of the computation in and of itself."
},
{
"end_time": 419.07,
"index": 17,
"start_time": 391.305,
"text": " You know, that speaks closely to all sorts of issues, you know, the potential excitement about neuromorphic computing, if you're a computer scientist, the importance of in-memory processing. So, technically, you're trying to elude the von Neumann bottleneck or the memory wall. And I introduce that because that speaks to"
},
{
"end_time": 434.923,
"index": 18,
"start_time": 419.65,
"text": " From an academic point of view, the importance of efficiency in terms of what is good belief updating, what is good, what is intelligent processing, but from a more societal point of view, the enormous"
},
{
"end_time": 460.725,
"index": 19,
"start_time": 435.282,
"text": " drain on our resources incurred by data farms, by things like large language models, in eating up energy and time and money in a very non-biomimetic way. So I think mortal computation as a notion has probably got a lot to say about debates in terms of direction of travel, certainly in artificial intelligence research."
},
{
"end_time": 490.077,
"index": 20,
"start_time": 462.022,
"text": " But you'll have to unpack the philosophical reference for me. So, Joscha, you also had a paper called A Path to Generative Artificial Selves with your co-author, Liane Gabora. Gabora, sorry. I don't even know if I'm pronouncing that correctly. Toward the end of the paper, you had some criteria about selfhood, something called maxRAF, which has RAFs as a subset, and there were about six or seven criteria. Can you outline what you were trying to achieve with that? What a RAF is?"
},
{
"end_time": 520.128,
"index": 21,
"start_time": 490.333,
"text": " and what does personal style have to do with any of this? Liane likes to express her ideas in the context of autocatalytic networks. But if we talk to a general audience, I think rather than trying to unpack this particular strain of ideas and translate it into the way in which we normally think about these topics, I think it's easier to start directly from the bottom, from the way in which information processing systems in nature differ from those that we are currently building."
},
{
"end_time": 543.422,
"index": 22,
"start_time": 520.538,
"text": " in our GPUs, because the stuff that we put in our GPUs is designed from the outside in. We basically have a substrate with well-defined properties. We design the substrate in such a way that it's fully deterministic, so it does exactly what we want it to do. And then we impose a function on it that is computing exactly what we want it to compute. And so we design from scratch"
},
{
"end_time": 553.046,
"index": 23,
"start_time": 543.882,
"text": " what that system should be doing, but it's only working because the system is at some lower level already implementing all the necessary conditions for computation."
},
{
"end_time": 577.739,
"index": 24,
"start_time": 553.831,
"text": " And we are implementing a functional approximator on it that does functional approximation to the best of our own understanding. There's a global function that is executed on this neural network architecture. And in biology, this doesn't work; the same holds in social systems. These are all systems where you could say they are built from the inside out. So basically there are local agents, cells, that have to impose structure on the environment."
},
{
"end_time": 598.78,
"index": 25,
"start_time": 577.927,
"text": " And at some point, they discover each other and start to collaborate with each other and replicate the shared structure. But before this happens, there's only chaos around them, which they turn gradually into complexity. And so the intelligence that we find in nature is something that is growing from the inside out into a chaotic world, into an unknown world. And this"
},
{
"end_time": 627.841,
"index": 26,
"start_time": 599.497,
"text": " is a very different principle that leads to different architectures. So when we think about an architecture that is going from the inside out, it needs to be colonizing in a way, and it needs to impose an administration on its environment that basically yields more resources, more energy, than the maintenance of this administration costs. And it also needs to be able to defend itself against competing administrations that would want to do the same thing. So you are the set of principles"
},
{
"end_time": 658.2,
"index": 27,
"start_time": 628.268,
"text": " that outcompetes all the other principles that could occupy your volume of space. And the systems that do this basically need to have a very efficient organization, which at some point requires that they model themselves, that they become to some degree self-aware. And I think that's why, from a certain degree of complexity on, the forms of organization that you find both in minds and in societies need to have self-models. They need to have models about what they are and how they relate to the world."
},
{
"end_time": 687.875,
"index": 28,
"start_time": 658.865,
"text": " And this is what I call sentience in the narrow sense. It's not the same thing as consciousness. Consciousness is this real time perceptual awareness of the fact that we are perceiving things that creates our perceptual individual subjective now. But sentience is something that I think can also be attained by say a large corporation that is able to model its own status, its own existence in a legal and practical and procedural way."
},
{
"end_time": 717.739,
"index": 29,
"start_time": 688.285,
"text": " and that is training its constituents, the people who enact that agent, in following all the procedures that are necessary for keeping that sentient larger system that is composed of them alive. And so when we try to identify principles that could be translated into nervous systems or into organisms consisting of individual self-interested cells, we see some similarities. We basically can talk about how self-stabilizing agents emerge in self-organizing systems."
},
{
"end_time": 736.425,
"index": 30,
"start_time": 718.66,
"text": " So Karl, I know quite a slew was said. If you don't mind saying, what about what Joscha had spoken about coheres with your model, your research, or what contravenes it? No, I was just marveling at how consilient it is, using a lot of my favorite words."
},
{
"end_time": 752.654,
"index": 31,
"start_time": 736.766,
"text": " I never heard the inside out."
},
{
"end_time": 769.462,
"index": 32,
"start_time": 753.234,
"text": " This is"
},
{
"end_time": 785.555,
"index": 33,
"start_time": 769.872,
"text": " Actively sampling and actively generating hypotheses for sensations, and crucially you are in charge of the sensory data that you are making sense of, which speaks exactly to, I think, what Joscha was saying in terms of"
},
{
"end_time": 812.381,
"index": 34,
"start_time": 786.186,
"text": " and designing and orchestrating and creating an ecosystem in that sort of inside-out way. That sounds absolutely consistent with certainly the perspective on self-organization to non-equilibrium steady state. So talking about sort of stable, sustainable kinds of self-organization, again, that you see in the real world, and quintessentially biomimetic."
},
{
"end_time": 841.988,
"index": 35,
"start_time": 812.79,
"text": " If you wanted, I think, to articulate what we've just heard from the point of view of a physicist who's studying non-equilibrium steady states, that's exactly the kind of thing that you get. Even to the notion of the increasing complexity of a structural sort that requires this sort of consistent harmonious"
},
{
"end_time": 854.343,
"index": 36,
"start_time": 842.892,
"text": " I'm"
},
{
"end_time": 882.892,
"index": 37,
"start_time": 854.753,
"text": " Another key point that was brought to the table was this notion of how essentially it has to have a self-model, either immediately or as a matter of the early cybernetics movement and notions of the good regulator theorem from Ross Ashby. But I think Joscha has taken that one step further than Ashby and his colleagues, in the sense that it is a model of self. And I think that's"
},
{
"end_time": 911.067,
"index": 38,
"start_time": 883.285,
"text": " With TD Early Pay, you get your paycheck up to two business days early."
},
{
"end_time": 934.309,
"index": 39,
"start_time": 911.459,
"text": " Which means you can go to tonight's game on a whim. Check out a pop-up art show. Or even try those limited edition doughnuts. Because why not? TD Early Pay. Get your paycheck automatically deposited up to two business days early for free. That's how TD makes payday unexpectedly human."
},
{
"end_time": 965.674,
"index": 40,
"start_time": 937.705,
"text": " Almost invariably, when I speak to both of you, the concept of self comes up. I think we could do a control-F in the transcript and we'll see that it's orders of magnitude larger than the average amount of times that word is mentioned. And I'm curious as to why. Well, in part, that's because of the channel, the nature of this channel. But is there something about the self that you all are trying to solve? Are you trying to understand what is the self? Are you trying to understand yourselves? Karl or Joscha, if you want to tackle that."
},
{
"end_time": 992.295,
"index": 41,
"start_time": 967.21,
"text": " Well, the problem of naturalizing the mind is arguably the most important remaining project of human philosophy. And it's risky and it's fascinating. And I think it was at the core of the movement when artificial intelligence was started. It's basically the same idea that Leibniz and Frege and Wittgenstein pursued and basically this idea of mathematizing the mind."
},
{
"end_time": 1011.152,
"index": 42,
"start_time": 992.688,
"text": " And the modern version of mathematics is constructive mathematics, which is also known as computation. And this allows us to make models of minds that we can actually test by re-implementing them. It also allows us to at some point connect philosophy and mathematics, which means that we will be able to say things in a language"
},
{
"end_time": 1040.674,
"index": 43,
"start_time": 1011.613,
"text": " that is both so tight that it can be true and we can determine the truth of statements in a formal way. And the other side so deep and rich that we can talk about the actual reality that we experience and observe. And to close this gap between philosophy and mathematics, we need to automate the mind because our human minds are too small for this. But we need to identify the principles that are approximated in the workings of biological cells that model reality."
},
{
"end_time": 1066.544,
"index": 44,
"start_time": 1041.34,
"text": " and then scale them up in the substrate that can scale up better than the biological computations in our own skulls and bodies. And this is one of the most interesting questions that exists. I believe it is the most interesting and most important question that exists. The understanding of our personal self and how this relates to our mind and how our mind is implemented in the world is an important part of this."
},
{
"end_time": 1086.271,
"index": 45,
"start_time": 1067.022,
"text": " And while it's personally super fascinating, I guess also for many of the followers of your channel, it's quite programmatic in its name and direction. This is to me almost incidental. On the other hand, I notice an absence of seriousness in a lot of neuroscientists and AI researchers"
},
{
"end_time": 1116.237,
"index": 46,
"start_time": 1086.664,
"text": " who do not actually realize in their own work that when they think about the mind and mental processes and mental representations and so on, that they actually think about their own existential condition and have to explain this and integrate this. So we have to account for who we are in this way. And if you actually care about who we are, we have to find models that allow us to talk about this in an extremely strict, formal and rational way. And our own culture has, I think, a big gap in its metaphysics and"
},
{
"end_time": 1145.333,
"index": 47,
"start_time": 1116.459,
"text": " which happened after we basically transcended the Christian society. We kicked out a lot of terms that existed in the Christian society to talk about mind, consciousness, intentionality and so on, because they seem to be superstitious, overloaded with religious mythology and not tenable. And so in this post-enlightenment world, we don't have the right way to think about what consciousness and self and so on is. And part of the project of understanding the mind is"
},
{
"end_time": 1175.691,
"index": 48,
"start_time": 1145.759,
"text": " to rebuild these foundations, not in any kind of mythological superstitious way, but by building on our first principles thinking that we discovered in the last 200 years and then gradually build a terminology and language that allows us to talk again about consciousness and mind and how we exist in the world. So for me, it's a very technical notion, the self. It's just the model of an agent's interest in the universe that is maintained"
},
{
"end_time": 1187.841,
"index": 49,
"start_time": 1176.101,
"text": " by a system that also maintains a model of this universe. So my own self is basically a puppet that my mind maintains about what it would be like if there was a person that cared."
},
{
"end_time": 1210.674,
"index": 50,
"start_time": 1188.575,
"text": " And I perceive myself as the main character of that story. But I also noticed that there is intelligence outside of this, coexisting with me in my own brain, that is generating my emotions and generating my world model, my perception, and so on. Basically keeping the score; all the pain and pleasure that I experience is generated by intelligent parts of my mind outside of my personal self."
},
{
"end_time": 1232.585,
"index": 51,
"start_time": 1211.34,
"text": " Well, Karl, what's left to be said?"
},
{
"end_time": 1256.049,
"index": 52,
"start_time": 1233.183,
"text": " Well, he's just said it. I can see there's a pattern here. I'll say what he just said in different words if I can. So yeah, I love this notion of using the word naturalization. I think naturalizing things in terms of mathematics and possibly physics is exactly the right way to go and it does remind me of"
},
{
"end_time": 1272.159,
"index": 53,
"start_time": 1256.323,
"text": " My friend Chris Fields' notion that our job is basically to remove any bright lines between physics, biology, psychology and now philosophy and I think mathematics is the right way to do that or at least"
},
{
"end_time": 1301.015,
"index": 54,
"start_time": 1272.466,
"text": " the formalism, the calculus that you get from mathematics or possibly category theory or whatever that can be instantiated in silico or ideally in any mortal computation. So I think that's a really important point and it does speak I think to a broader agenda which was implicit in Josh's review which is the ability to share"
},
{
"end_time": 1328.643,
"index": 55,
"start_time": 1301.015,
"text": " to share a common ground, to share a generative model of us in a lived world, where that lived world contains other things like us. So the one, I think, requisite for just existing in the shared world is actually having a shared model, and then that brings all sorts of interesting questions to the table about: is my model of me the same kind of model that I'm using"
},
{
"end_time": 1358.422,
"index": 56,
"start_time": 1329.189,
"text": " of you, to explain you, to ascribe to you intentionality and all those really important states of being or at least hypotheses from the point of view of predictive processing accounts, hypotheses that I am in this mental state and you are in that mental state. So I think that was a really important thing to say that we need to naturalize our understanding of the way that we work in our worlds."
},
{
"end_time": 1385.333,
"index": 57,
"start_time": 1358.814,
"text": " In relation to the importance of self, again I'm just thinking from the point of view of a physicist that you cannot get away from the self if you just start at the very beginning of information theoretic treatments, self-information for example. That's where it all starts for me certainly regarding variational free energy as a variational bound on self-information."
},
{
"end_time": 1409.275,
"index": 58,
"start_time": 1385.674,
"text": " And then you talk about self-organization, talking all the way through to the notion of self-evidencing, as Jakob Hohwy would put it; at every point you are writing down or naturalizing the notion of self at many, many different levels. And indeed, if one generalizes that,"
},
{
"end_time": 1420.418,
"index": 59,
"start_time": 1410.623,
"text": " You're almost specifying the central importance of thingness in the sense that I am a thing and by virtue of saying I"
},
{
"end_time": 1450.35,
"index": 60,
"start_time": 1420.64,
"text": " I am implying, inducing a certain self-aspect to me as a thing, and again that's the starting point for certainly the free energy principle's approach to this kind of self-organization. I repeat, I think Joscha is taking us one step further though, in terms of: we can have ecosystems of things, but when those things now start to have to"
},
{
"end_time": 1459.906,
"index": 61,
"start_time": 1450.623,
"text": " play the game of modeling whether you cause that or whether I cause that, that now brings to the table"
},
{
"end_time": 1482.585,
"index": 62,
"start_time": 1460.316,
"text": " an important model of our world that there is a distinction between me and you. And as soon as you have this fundamental distinction, which of course would be something that a newborn baby would have to spend hours, possibly months building and realizing that mum is separate from the child herself."
},
{
"end_time": 1505.162,
"index": 63,
"start_time": 1482.978,
"text": " So I think that's terribly important. One final thing, just to speak again to the importance of articulating your self-organization in terms of things like intentions and beliefs and stances"
},
{
"end_time": 1526.937,
"index": 64,
"start_time": 1505.572,
"text": " I think that's also quite crucial and what it means if you want to naturalize it mathematically you have to have a calculus of beliefs. So you're talking basically a formulation either in terms of information theory or probability theory where you're now reading the probabilistic description of this universe and the way that we are part of that universe"
},
{
"end_time": 1542.841,
"index": 65,
"start_time": 1527.244,
"text": " in terms of beliefs and starting to think about all of physics in terms of some kind of belief updating. Karl, you use the word shared model. Now, is that the same as shared narrative?"
},
{
"end_time": 1564.65,
"index": 66,
"start_time": 1543.541,
"text": " Ever find yourself questioning reality and then you snap back to it, remembering that, hey, you have nothing planned for dinner? That's where HelloFresh comes in. They deliver pre-portioned, farm-fresh ingredients and sensational seasonal recipes right to my door. They have over 40 choices every week, which keeps me and my wife exploring new flavors. I did the pronto option."
},
{
"end_time": 1590.213,
"index": 67,
"start_time": 1564.65,
"text": " They've also got this quick and easy option that makes 15 minute meals. There's also options if you're vegetarian, if you only eat fish. Something that I love is that their deliveries show up right on time, which isn't something that I can say about other food delivery services. This punctuality is a huge deal for both myself and my wife. Plus, we love using HelloFresh as a way to bond. We cook together. It's super fun when it's all properly portioned out for you already."
},
{
"end_time": 1617.517,
"index": 68,
"start_time": 1590.213,
"text": " So are you still on the fence? Well, it's cheaper than grocery shopping and 25% less expensive than takeout. The cherry on top? Head to hellofresh.com slash theories of everything free and use the code theories of everything free all as one word, all as caps, no spaces for a free breakfast for life. That's free breakfast people. Don't forget HelloFresh is America's number one meal kit. Links in the description."
},
{
"end_time": 1634.462,
"index": 69,
"start_time": 1618.217,
"text": " Karl, you use the word shared model. Now, is that the same as shared narrative? Yes, common ground. I mean, I use it literally in the sense of a generative model"
},
{
"end_time": 1659.684,
"index": 70,
"start_time": 1635.179,
"text": " somebody in generative AI would understand the notion. So if we talk about self-model as a special kind of generative model that actually entertains the hypothesis that I am the cause of my sensations and, you know, Joscha took us through the myriad of sensations that I need to explain, then we're talking about self-models as part of my generative model"
},
{
"end_time": 1681.613,
"index": 71,
"start_time": 1659.684,
"text": " where that includes this notion that I am the agent that is actually gathering the data that the generative model is modeling. So the generative model is just a simple specification. Again, from the physics perspective, it's actually just a probabilistic description of the characteristic states of something, namely me."
},
{
"end_time": 1711.92,
"index": 72,
"start_time": 1682.142,
"text": " that can be then used to describe the kind of belief updating that this model would have to evince in order to exist when embedded in a particular universe. Other readings of a generative model would be exactly the common ground that we all share. Part of my generative model would be the way that I broadcast my inference, my belief updating using language, for example."
},
{
"end_time": 1738.217,
"index": 73,
"start_time": 1712.21,
"text": " That requires a shared generative model about the semiotics and the kind of way that I would articulate or broadcast my beliefs. That generative model is a model of dynamics. It's a model not just of the state of the world, but the way that the transition dynamics, the trajectories, the paths. I'm using your word narrative just as a euphemism."
},
{
"end_time": 1765.384,
"index": 74,
"start_time": 1738.217,
"text": " for a certain kind of path through some model state space. So if you and I share the same narratives in the sense that we are both following the same conversation and the same mutual understanding, we are sharing our beliefs through communication, then that is exactly what I meant. For that to happen, we have to have the same kind of generative model. We have to speak the same language and we have to"
},
{
"end_time": 1792.602,
"index": 75,
"start_time": 1765.674,
"text": " I suspect that what makes this project so difficult is that our models of reality are necessarily coarse grained. They don't describe"
},
{
"end_time": 1819.684,
"index": 76,
"start_time": 1792.875,
"text": " the universe as it is in a way in which it can exist from the ground up. But they start from the vantage point of an observer that is sampling the universe at a low resolution, both temporal and spatial, and in only very few dimensions. And this model is built on a quite unreliable and nondeterministic substrate. And this puts limitations on what we can understand with our unaugmented mind. I sometimes"
},
{
"end_time": 1847.449,
"index": 77,
"start_time": 1819.974,
"text": " joke that the AGIs of the future will like to get drunk to the point where they can only model reality with 12 layers or so, and they have the same confusions as human physicists when trying to solve the puzzles that physics poses. And they might find this hilarious, because many of the questions that have been stumping us during the last 130 years since we have had modern physics might be easy to resolve if our minds were just a little bit better."
},
{
"end_time": 1872.517,
"index": 78,
"start_time": 1847.739,
"text": " We seem to be scraping at a boundary of our understanding for a long time. And now we are, I think, at the doorstep of new tools that can solve some puzzles that we cannot solve and then break them down for us in a way that is accessible to us, because they will be able to understand the way in which we model the world. But until then, we basically work in overlapping realities."
},
{
"end_time": 1901.22,
"index": 79,
"start_time": 1873.131,
"text": " We have different perspectives on the world and the more we dig down, the more subtle the differences between our models of reality become. And this also means that if you have any kind of complex issue, we tend not to be correct in groups. We tend to be only sometimes individually correct in modeling them and we need to have a discourse between individual minds about what they observe and what they model. Because as soon as a larger group gets together and tries to vote about how to understand"
},
{
"end_time": 1930.316,
"index": 80,
"start_time": 1901.544,
"text": " a concept like variational free energy, all the subtleties are going to be destroyed, because not all of the members of the group will understand what we're talking about. So they will replace the most subtle understandings with common ground that is not modeling reality with the degree of resolution that would be necessary, or they're not able to break things down to first principles. And this first-principles understanding, I think, is an absolute prerequisite when we want to solve foundational questions."
},
{
"end_time": 1949.531,
"index": 81,
"start_time": 1931.186,
"text": " I sometimes doubt whether physics is super well equipped for doing this. When I was young, I thought physics is about describing physical reality, the world that we are in at some level. And now I see that physics is an art. It's the art of describing arbitrary systems using short algebraic equations."
},
{
"end_time": 1977.363,
"index": 82,
"start_time": 1950.845,
"text": " And the stuff that cannot yet be described with short algebraic equations, like chemistry, is ignored by physicists and left to lesser minds. And only 8% of the physicists end up working in physics in any traditional sense after their degree. The others work in process design and finance and healthcare and many, many other areas where you can apply the art of modeling arbitrary systems using short algebraic equations."
},
{
"end_time": 2006.135,
"index": 83,
"start_time": 1977.773,
"text": " And whenever that doesn't work, physicists are not worth very much. I've seen physicists trying to write programs, and many of them really have this bias of trying to come up with algebra and geometry where calculus would be much better or where automata would be much better. And nature doesn't care about this. Nature is using whatever works and whatever can be discovered. And very often that is not close to the toolkit of this intellectual tradition of the physicists."
},
{
"end_time": 2017.21,
"index": 84,
"start_time": 2006.459,
"text": " But I think it's sometimes helpful to see that all these intellectual traditions that our civilization has built start out with some foundational questions and then congregate around a certain set of methods."
},
{
"end_time": 2041.681,
"index": 85,
"start_time": 2017.841,
"text": " and it can be helpful to just go to the outside of all these disciplines for a while and then move around between them and look at them and study their tools and see what common ground and what differences we can discover. I was quite shocked when I learned that a number of machine learning algorithms had been discovered in the 80s and 90s by econometricians and were just ignored in AI and had to be reinvented from scratch."
},
{
"end_time": 2059.923,
"index": 86,
"start_time": 2042.534,
"text": " I suspect there's a lot of these things happening in our Tower of Babel that we are creating across sciences, because our languages start to differ in subtle ways and sometimes fundamentally mismodel reality or ignore it, to the point where I think most living neuroscientists are practically dualists."
},
{
"end_time": 2088.916,
"index": 87,
"start_time": 2060.265,
"text": " They will not say it out loud because that's been frowned upon, but they don't actually see a way to break down consciousness, mind and self into the stuff that would run on neurons. Or they don't even think about the causal structure in the same way as you would need to get to this point. And as a result, they believe that thinking about these concepts is fundamentally unscientific. It's outside of the pursuit of science and they do this only in church on Sundays. Yeah. So what's the solution to this Tower of Babel?"
},
{
"end_time": 2105.299,
"index": 88,
"start_time": 2090.776,
"text": " Do you also see the problem similarly and do you see the solution similarly?"
},
{
"end_time": 2130.93,
"index": 89,
"start_time": 2105.964,
"text": " I think I do. It's a nice biomimetic AI. I love this notion. I hope no physicists are watching. Also, the only physicists that I know all want to do neuroscience or psychology in addition to economics and healthcare, which is all small particle physics. It's either neuroscience or small particle physics."
},
{
"end_time": 2149.838,
"index": 90,
"start_time": 2131.374,
"text": " And as I get older, I'm increasingly compelled by arguments that I've read from very senior, old physicists that it's all about measurement, it's all about observation and in a sense, all of physics is just one of these"
},
{
"end_time": 2176.254,
"index": 91,
"start_time": 2150.162,
"text": " generative models that has a particular capacity to disseminate itself, so that we do have this common language and this common ground. So just to reiterate one of Joscha's points, physics in and of itself is just another story that we find particularly easy to share. But I do take the point that even within physics,"
},
{
"end_time": 2194.206,
"index": 92,
"start_time": 2176.459,
"text": " There is this tendency"
},
{
"end_time": 2216.92,
"index": 93,
"start_time": 2195.538,
"text": " The free energy principle is unashamedly committed to classical formulations of the universe in terms of random dynamical systems and Langevin equations, and that would horrify quantum physicists and quantum information theorists, who just wouldn't think about that. That's why I slipped in that reference to"
},
{
"end_time": 2234.462,
"index": 94,
"start_time": 2216.92,
"text": " quantum reference frames, because what we're talking about now is the alignment of quantum frames of reference. But that uses a completely different language, and that, I think, is part of the problem that you bring to the fore."
},
{
"end_time": 2263.916,
"index": 95,
"start_time": 2234.923,
"text": " that what we need is something that's superordinate, that joins the dots and may well require transcending the particular common ground or physics or calculus or philosophies that have endured. So if by that the artificial intelligence is going to be one way of joining the dots so that people in machine learning don't have to reinvent the wheel every generation,"
},
{
"end_time": 2286.254,
"index": 96,
"start_time": 2264.343,
"text": " then I think he's absolutely right. Whether I would call that artificial intelligence or not, I'm not so sure. I think it would start to become part of a grander ecosystem that would have a natural aspect to it. But perhaps I could ask you: do you actually mean artificial intelligence in the sense that it doesn't have"
},
{
"end_time": 2298.66,
"index": 97,
"start_time": 2286.527,
"text": " individual scientists or people?"
},
{
"end_time": 2326.408,
"index": 98,
"start_time": 2299.906,
"text": " Maybe I don't understand your notions of mortality and biology completely. To me, biology means that the system is made of cells, of biological cells, of cells that are built on the carbon cycle, founded on certain chemical reactions that nature has discovered and translated into machines made from individual molecules that interact in very specific ways. And it's the"
},
{
"end_time": 2355.23,
"index": 99,
"start_time": 2326.954,
"text": " only agent that we have discovered to occur in nature, I think. And all the other agents we discover are made by or of cells. And mortality is an aspect of the way in which multicellular systems adapt to changing environments. They have offspring that mutates and then gets selected against. And as a result, we have a change trajectory that can be calibrated to the rate of change in an ecosystem."
},
{
"end_time": 2367.858,
"index": 100,
"start_time": 2355.896,
"text": " And this is one of the reasons for mortality. Another reason for mortality is if you set up a system that has suboptimal self-stabilization, it is going to deviate from its course."
},
{
"end_time": 2386.817,
"index": 101,
"start_time": 2368.336,
"text": " Imagine you build an institution like the FDA and you set it up to serve certain purposes in society. After a few generations, the people in that organization to a very large degree start serving the interests of the organization and the interests that have captured the organization. And so it becomes not only larger and more expensive,"
},
{
"end_time": 2413.097,
"index": 102,
"start_time": 2387.227,
"text": " but at some point it's possibly doing more harm than good. That doesn't mean that we don't need an FDA, but it might mean that we have to make the FDA mortal, so it gets reborn every now and then and can put itself back on track based on a specification that outside observers think is reasonable rather than a specification that needs to be negotiated with the existing stakeholders within that organization and the few people who are left outside."
},
{
"end_time": 2442.449,
"index": 103,
"start_time": 2413.626,
"text": " I think this is one of the most important aspects of mortality. But imagine that all of Earth would be colonized by a single agent, something that is able to persist not only across organisms, but is also able to think using all other molecules that can be turned into computational systems and into representational architectures and agentic architectures. You have a planet that is similar to Stanisław Lem's Solaris, a thinking system that is"
},
{
"end_time": 2453.439,
"index": 104,
"start_time": 2443.148,
"text": " realizing what it is, that realizes that it is basically a thinking planet that is trying to defeat entropy for as long as possible, and to this end builds complexity."
},
{
"end_time": 2476.493,
"index": 105,
"start_time": 2453.951,
"text": " Why would the system need to be mortal? And would that system still be biological? It would be self-organizing. It would be dynamic. It would be threatened with death, with non-existence. It would react to this in some way. But I'm not sure if biology and mortality are the right categories to describe it. I think these are more narrow categories that apply to biological organisms in the present setting of the world."
},
{
"end_time": 2507.602,
"index": 106,
"start_time": 2477.739,
"text": " I picked up on a phrase you said, Karl, which is one of the solutions, maybe AI. That's what you were saying in response to Joscha, which makes me think, had Joscha not mentioned AI as the resolution to the indecipherability across discipline boundaries, what would you have said a solution or the solution would be? Well, I think the solution actually lies in what Joscha was just saying, in the sense that if the"
},
{
"end_time": 2537.125,
"index": 107,
"start_time": 2508.046,
"text": " self-understanding is seen in the context of exchange with others, and that provides the right kind of context. I think we're talking, I've used the word a lot now, but I'm talking about an ecosystem at any arbitrary scale, and an ecosystem that provides that opportunity for self-evidencing,"
},
{
"end_time": 2560.896,
"index": 108,
"start_time": 2538.746,
"text": " a phrase that just is a statement that you've got an itinerant, open kind of self-organization that maintains this minimum entropy state in exactly the same way that Joscha was intimating. So I'm just thinking about what"
},
{
"end_time": 2579.002,
"index": 109,
"start_time": 2561.357,
"text": " is"
},
{
"end_time": 2603.217,
"index": 110,
"start_time": 2579.241,
"text": " an inevitable aspect of self-organizing systems that will endure over time in the sense of minimizing the entropy of the states that they occupy. And I do think that is the solution, which is why I was pushing back against artificial intelligence, but for a particular reason. The way that mortal computation is framed,"
},
{
"end_time": 2633.848,
"index": 111,
"start_time": 2603.951,
"text": " certainly in that paper in which I was the second author, is that immortal computers are built around software so they are immortal in the sense you can rerun the same program on any hardware. If the running of the software and the processing that ensues is an integral part of the hardware on which it is run, then it becomes mortal."
},
{
"end_time": 2656.254,
"index": 112,
"start_time": 2634.087,
"text": " And that's important because the opportunity for dying if you are mortal now creates the kind of selective pressure from an evolutionary perspective of exactly the kind that Joscha was talking about. If you don't have the opportunity to die, if you don't have the opportunity to disassemble the FDA because it's no longer fit for purpose,"
},
{
"end_time": 2684.462,
"index": 113,
"start_time": 2656.869,
"text": " then you will not have a sustainable self-organization that continually maintains a low entropy in the sense that it has some characteristic recognizable states. So I think there is a deep connection between self-organization that we see in biological, social, possibly meteorological systems and"
},
{
"end_time": 2711.34,
"index": 114,
"start_time": 2684.804,
"text": " a certain kind of mortality in which, for example, information about the kind of environment that I am fit to survive in and to learn about is part of my genomic structure. But to realize that, if you like, evidence accumulation through evolutionary mechanisms, I have to have a life cycle. I have to die."
},
{
"end_time": 2734.667,
"index": 115,
"start_time": 2711.8,
"text": " I'm not implying that everybody has to die in order to live. I'm implying that there has to be some particular kind of dynamics. There has to be a life cycle. It could be an economic life cycle. It could be boom and bust, for example, but that has to be part of this self-evidencing and certainly an exchange"
},
{
"end_time": 2764.514,
"index": 116,
"start_time": 2734.94,
"text": " in the kind of multicellular context that Joscha was mentioning. So by mortal, I just mean my reading of mortal in this particular conversation would be to say yes, it is the kind of biological behavior that is characteristic of cells that self-assemble but also die. One attractive metaphor that came to mind when talking about the FDA becoming too"
},
{
"end_time": 2790.794,
"index": 117,
"start_time": 2765.06,
"text": " An organization becoming too big for its own good and not being a good model of the system in which it is immersed. So it's not meeting customers needs. It's not even meeting its own needs would be a tumor. So you could understand a lot of the institutional pathologies and geopolitical pathologies and possibly even climate change."
},
{
"end_time": 2809.326,
"index": 118,
"start_time": 2791.015,
"text": " All of this can be read in terms of a process of mortal computation at a certain scale."
},
{
"end_time": 2829.821,
"index": 119,
"start_time": 2809.991,
"text": " where there is an opportunity for things to go away, to dissolve. That has to be the case, in the same way that either the tumor kills you or it necroses because it kills off its own blood supply. It can't be any other way, really. There is a third way. You can evolve an immune response against tumors."
},
{
"end_time": 2855.265,
"index": 120,
"start_time": 2830.299,
"text": " Organisms that live much longer because they have slower generational change typically have better defenses against tumors than shorter-lived organisms like us. Basically, a tumor can be seen as a set of tissues or a subset of agents. You can in principle have a tumor in an ant colony that is playing a shorter game than the organism itself and the larger system itself."
},
{
"end_time": 2884.582,
"index": 121,
"start_time": 2855.657,
"text": " and you can sustain a number of tumors if your environment does not put too much pressure on you. But at some point, the tumors are going to bring you down. And so, for instance, I think that the free world has to make at some point a decision of whether it is accepting to be brought down and replaced by a different type of social order or whether it's going to evolve or build or construct or design an immune response against tumors and criteria to identify them and remove them."
},
{
"end_time": 2910.35,
"index": 122,
"start_time": 2885.367,
"text": " I think that's not a natural law. At least I don't see how to prove from first principles that we cannot overcome a problem like institutional calcification or turning of institutions into tumor-like structures functionally. I think it might be possible to do that. The cell itself is not mortal. The cell is pretty much immortal. The individual cells can die and"
},
{
"end_time": 2932.5,
"index": 123,
"start_time": 2910.674,
"text": " disappear. But the cell itself is still the first cell. It's just splitting and splitting, and it's alive in all of us. Every cell in our own body is still this first cell, just split off from it. Right. And so the way in which organisms die and so on is just a detail in this larger project of the cell, which itself is so far immortal."
},
{
"end_time": 2957.91,
"index": 124,
"start_time": 2933.302,
"text": " And when I talk about AI being the solution to everything, of course, I'm joking a little bit. I'm just equating some of the sentiment and part of the enthusiastic culture of my young field. But I'm only joking a little bit because I think that AI has the potential to reverse engineer the general principles of a learning agent."
},
{
"end_time": 2971.015,
"index": 125,
"start_time": 2958.746,
"text": " of a system that is able to model the future and regulate for the future and perform functions in an arbitrary way. And I would replace the notion of the hardware, the substrate."
},
{
"end_time": 2997.654,
"index": 126,
"start_time": 2971.613,
"text": " Of course, it's still hardware, but it can be an arbitrary substrate, and the substrate can also be to a large degree software, which means causal principles that are implemented ultimately on physics. But this causal structure ultimately is a protocol layer that allows you to basically implement a representational language in which an agent can realize itself as a causal structure."
},
{
"end_time": 3027.09,
"index": 127,
"start_time": 2998.404,
"text": " And I think that AI is working currently on very different substrates than the biological ones. But there is a superset of these principles that can make AI substrate agnostic. I think that the implication of the Church-Turing thesis is that it doesn't really matter which hardware you're using. In practice, it does matter, because if a hardware is not very deterministic or doesn't give you a lot of memory or is very slow, you will notice big differences."
},
{
"end_time": 3056.032,
"index": 128,
"start_time": 3027.568,
"text": " But if you abstract this away, the representational power and the potential for agency is not really dependent on your hardware. It turns out that the hardware that you're currently using for AI is much, much more powerful than the hardware that biology is using. The reason why AI is so weak compared to human minds or biological systems is because the algorithms that we have discovered, we have discovered them by hand. These were people tinkering. Sorry, what do you mean the AI is weak?"
},
{
"end_time": 3079.258,
"index": 129,
"start_time": 3056.493,
"text": " I mean that in order to get a system that is almost coherent, we need to train it with the entire internet, with almost everything that humans have ever written. And as a result, we get a system that is using tremendously more resources than the human brain has at its disposal. I'm not talking about the computational power that is implemented in an individual cell, which might be very large."
},
{
"end_time": 3101.032,
"index": 130,
"start_time": 3079.565,
"text": " But the part of the power of the individual cell that is actually harnessable by the brain for performing computation, that is very little. It's only a small fraction of what the neuron is doing to do its own maintenance, housekeeping, metabolism, communication with neighbors that is actually available for building computation at the brain level. As an example, I sometimes use this"
},
{
"end_time": 3123.336,
"index": 131,
"start_time": 3101.476,
"text": " the Stable Diffusion weights when they came out. Stability AI is an AI company that makes open-source models, and they made a vision model by training GPUs on hundreds of millions of images and text drawn from the internet and cross-correlating them until you can type in a phrase and then get a picture that depicts that phrase. It's amazing that this works at all."
},
{
"end_time": 3141.766,
"index": 132,
"start_time": 3123.712,
"text": " It requires enormous computational power because it's far less efficient than a human brain that is learning how to draw pictures after seeing things. And these weights, this neural network, they know everything. They basically know how to draw all the celebrities"
},
{
"end_time": 3170.828,
"index": 133,
"start_time": 3142.227,
"text": " and how to draw all artistic styles and all the plants, and everything is in there. And it's just two gigabytes. You can download it. It's only two gigabytes. And it's like 80% of what your brain is doing is captured in these two gigabytes. And it's so much more than what a human brain could reproduce. It's absolutely brute-forcing it. At the same time, two gigabytes doesn't seem to be a lot, which suggests that our own brain is probably not effectively storing much more information than a few gigabytes."
},
{
"end_time": 3198.763,
"index": 134,
"start_time": 3171.442,
"text": " That's very humbling. And the reason why we can do so much more with it and so much faster than the AI is not because biological cells are so much more efficient than transistors. It is because they are self-organizing and have been at this game for quite some time and figured out a number of tricks that human engineers couldn't figure out so far. Right. Karl, do you want to expand on points of contention and the mortality and perhaps permanence of a cell?"
},
{
"end_time": 3220.879,
"index": 135,
"start_time": 3199.684,
"text": " I just wanted to celebrate this notion that the cell in a sense is immortal because of course the whole point of this is to try and understand"
},
{
"end_time": 3247.585,
"index": 136,
"start_time": 3221.254,
"text": " systems that endure over long periods of time. And that's what I meant by that. I didn't mean that death meant cessation. I just meant there's a certain life cycle, an itinerancy in play. So I thought that was nicely illustrated by the notion that the cell is in a sense unending. But the mortal-immortal distinction is more about"
},
{
"end_time": 3264.923,
"index": 137,
"start_time": 3249.002,
"text": " divorcing the software from the substrate. And there's a bit of a pushback: if you want to look for differences in the respective arguments, then"
},
{
"end_time": 3294.36,
"index": 138,
"start_time": 3265.418,
"text": " A lot of people would say that all that housekeeping that goes on in terms of intracellular machinations and self-organization, that just is basal computation at a particular level and that more macroscopic kinds of belief updating and processing and computation supervene at a certain scale and indeed that progression in a sort of scale invariant sense is"
},
{
"end_time": 3323.012,
"index": 139,
"start_time": 3294.599,
"text": " one manifestation of what you were talking about before, that biological things are cells of cells of cells and have increasingly higher kinds of scales and different kinds of computation. But the idea is that the same first principles apply at each and every level. And if you pursue that, one has to ask why modern AI, or particularly machine learning, is so inefficient."
},
{
"end_time": 3343.712,
"index": 140,
"start_time": 3323.456,
"text": " Dangerously inefficient. And there's, I think, a first-principles account of that, and the account would go along the following lines: the only objective function that you need to explain existence is the likelihood of your being, your marginal likelihood."
},
{
"end_time": 3373.609,
"index": 141,
"start_time": 3344.104,
"text": " That, statistically, is the model evidence. The model evidence, or the log of that evidence, can always be written down as accuracy minus complexity. Therefore, to exist is to minimize complexity. Why is that important? Well, first of all, it means that that coarse graining that we were talking about earlier on is not a constraint. It is actually part of an existential imperative to coarse-grain in the right kind of way."
},
{
"end_time": 3398.097,
"index": 142,
"start_time": 3374.565,
"text": " The other reason it's important is that there is a thermodynamic link between the complexity scored in terms of belief updating or processing or computation and the thermodynamic cost. And if that's the case, it explains why the direction of travel in terms of your machine learning is so inefficient."
},
{
"end_time": 3419.65,
"index": 143,
"start_time": 3398.404,
"text": " and what it tells you is there is a lower limit on the right way to do things. There is a lower limit on the thermodynamic efficiency and the information computational efficiency specified by the Landauer limit. Why does modern or current machine learning not get anywhere close to that Landauer limit?"
},
{
"end_time": 3438.882,
"index": 144,
"start_time": 3419.889,
"text": " The answer"
},
{
"end_time": 3467.756,
"index": 145,
"start_time": 3438.882,
"text": " is, I think, the von Neumann bottleneck. It is the memory wall. It is that people are trying to do computation in an immortal sense by running software without careful consideration of the substrate on which they're running or implementing that computation. So I would push back against the notion that it is even going to be possible, irrespective of whether it's the right direction of travel, in terms of"
},
{
"end_time": 3487.773,
"index": 146,
"start_time": 3467.995,
"text": " Artificial"
},
{
"end_time": 3515.657,
"index": 147,
"start_time": 3488.097,
"text": " So it doesn't have to be a biological cell, but certainly has to conform to the same principles of multi-scale self-organization of the most efficient sort. That just is the optimization of the marginal likelihood or the evidence for the states that that particular computing device or computer wants to be in. So that's what I had a slight sort of"
},
{
"end_time": 3538.848,
"index": 148,
"start_time": 3515.913,
"text": " I don't think that's the right way to go about it. I would actually come back to your very initial argument, Joscha, that it has to be much more biologically inspired. It has to be much more biomimetic, and part of that sort of inspiration"
},
{
"end_time": 3566.425,
"index": 149,
"start_time": 3539.428,
"text": " is the motivation for looking at the distinction between running immortal software on von Neumann architectures, on Nvidia chips, relative to a much more biomimetic approach, say photonics or neuromorphic computing. I think that really does matter in terms of getting"
},
{
"end_time": 3584.735,
"index": 150,
"start_time": 3566.8,
"text": " Okay, let me push back against this."
},
{
"end_time": 3608.268,
"index": 151,
"start_time": 3585.776,
"text": " First off, I do agree that current AI is brutalist in the sense that it is not making the best use of the available substrates and it's not building the best possible substrates. We have a number of path-dependency effects. It's not that the stuff that we are building and using is not clever or so, but it's a far cry from what biology seems to have discovered."
},
{
"end_time": 3636.032,
"index": 152,
"start_time": 3609.155,
"text": " At the same time, there is relatively little funding going into AI, and relatively little energy consumption, given what it gives you. If academics hear that it costs $20 million to train a model, they almost faint, because they compare this with their departmental budget. But if you compare this with the cost of making a halfway decent AI movie in Hollywood, it's negligible. So basically what goes into an AI project is far less"
},
{
"end_time": 3657.585,
"index": 153,
"start_time": 3636.408,
"text": " than what goes into a Hollywood movie about AI. And if you compare at that scale, if you look at the societal benefit of watching an AI movie versus another blockbuster about the Titanic, it may be comparable for now, but I think that AI has the potential to be dramatically more valuable than this. I think that AI"
},
{
"end_time": 3687.773,
"index": 154,
"start_time": 3657.978,
"text": " Even though it might sound counterintuitive, it's not using a lot of energy and it's not very well funded at the moment still compared to what the value of it is. Also, the leading labs do not believe that the transformer is going to be the architecture that we have in the end. It just happens to be one of the very few things that currently works at scale that we have discovered that can actually be scaled up in this brutalist way. And it's already better at completing prompts than the average person."
},
{
"end_time": 3717.773,
"index": 155,
"start_time": 3688.234,
"text": " and it's even better at writing code than many people. So it can translate between programming languages. You can write an algorithm down in English, and it can even help you to write an algorithm down in English and then translate it into the programming language of your choice. And it's pretty good at it. It can also, if it makes a mistake, and it often makes mistakes, understand the compiler messages and then try to suggest fixes that often work. In many ways, I've found that it's already better than a lot of people I've worked with in corporate contexts,"
},
{
"end_time": 3744.07,
"index": 156,
"start_time": 3718.114,
"text": " both at writing press releases and at writing code. It's not as good as the top-level people in their field, but it's quite surprising. And so there is this interesting, open, and tantalizing question: can we scale this up by using a slightly better loss function, slightly more compute, slightly more and better-curated data? And the systems can help us curate data and come up with different architectures and so on, to get this thing to be better at AI research than people."
},
{
"end_time": 3761.374,
"index": 157,
"start_time": 3745.111,
"text": " If that gets better at AI research than people, then we can leave the rest to it and go to the beach. And it will come up with architectures and solutions that are much more efficient than what we have come up with. At the same time, there are many labs and teams that work on different hardware, that work on different algorithms at the same time."
},
{
"end_time": 3785.538,
"index": 158,
"start_time": 3761.834,
"text": " The fact that you see so much news about the transformer at this point is not because everybody ignores everything else and doesn't work on it anymore, or has religious beliefs in the transformer being the only good thing. It's because it's the thing that currently works so well. People are working on all the other things too, but the thing that has the most economic impact and the most utility happens to be the stuff that currently works."
},
{
"end_time": 3813.49,
"index": 159,
"start_time": 3785.862,
"text": " And so this may cloud our perception, so that we think it's the von Neumann architecture and so on. But in some sense, the GPU is no longer a von Neumann architecture. We have many pipelines that work in parallel, that take in smaller chunks of memory that are located closer to the local processor. And while it's not built in the same way as the brain, where all the memory is directly inside of the cell or its immediate vicinity,"
},
{
"end_time": 3840.179,
"index": 160,
"start_time": 3814.002,
"text": " it is much closer to it, and it's able to emulate this. If I look at the leading neuromorphic architectures, I can emulate them on a CPU and it's not slower. This is all still research stuff that is early stage. But we're not emulating neuromorphic architectures on CPU or GPU for the most part, largely because it doesn't give us that many benefits over the existing architectures and libraries."
},
{
"end_time": 3869.019,
"index": 161,
"start_time": 3840.964,
"text": " The existing architectures and libraries work so well that people use this stuff for now, and it creates a local bubble until somebody builds a new stack that overtakes it. And I think this is all going to happen at some point. So I'm not that pessimistic about these effects. What I can see is that our computers can read text at a rate that is impossible for human beings, when you pass the data into a large language model for training."
},
{
"end_time": 3889.309,
"index": 162,
"start_time": 3869.65,
"text": " With this paradigm, it gets to be coherent in the limit. It's an interesting question. Maybe this paradigm is not correct."
},
{
"end_time": 3919.002,
"index": 163,
"start_time": 3889.684,
"text": " Maybe humans are doing something different. Maybe humans are maximizing coherence or consistency, and we have a slightly different formal definition. Life on Earth or agency in the universe might be minimizing free energy in the limit, but individual organisms are not able to figure that out. They do something that is only approximating it, but locally works much better and converges much faster. Maybe there are different loss functions that we have yet to discover that are more biological or more similar to biological systems."
},
{
"end_time": 3947.705,
"index": 164,
"start_time": 3919.582,
"text": " Also, one of the issues with biomimetic things is that it mostly means mimicking the things that scientists in biology and neuroscience have discovered so far. And that stuff mostly doesn't work yet. The reason why Mike Levin doesn't call himself a neuroscientist, I suspect, but a synthetic biologist, is that he doesn't want to come into conflict with the dogmatic approaches of some of neuroscience, which hold that computation stops at the neurons."
},
{
"end_time": 3950.503,
"index": 165,
"start_time": 3948.114,
"text": " It's only neurons that are involved in computing things."
},
{
"end_time": 3980.111,
"index": 166,
"start_time": 3951.032,
"text": " It could be, when you look at brains, that they are basically the telegraph networks of an organism, that the neuron is a telegraph cell. It's not unique in its ability to perform computation. It's only unique in its ability to send the results of computation, using some kind of Morse code, over long distances in the organism. And when you want to understand how organisms compute and you only look at neurons, it might be like looking at the economy around 1900 and trying to understand it by only modeling the telegraph network."
},
{
"end_time": 3996.271,
"index": 167,
"start_time": 3980.435,
"text": " You're going to learn fascinating things by looking at an economy through its telegraph network and its Morse code, but it's a mistake to think that communication can only happen in this Morse code, rather than, say, by sending RNA molecules to your neighbors. Why would you want to send spike trains if you can send strings?"
},
{
"end_time": 4015.725,
"index": 168,
"start_time": 3996.988,
"text": " Why would you want to perform such computations in a slow, awkward way? Why would you want to translate information into the time domain if you can send it in parallel all at once? So when we talk about biomimetic, we often talk about emulating things that we only partially understand and that don't actually work in a simulation."
},
{
"end_time": 4040.043,
"index": 169,
"start_time": 4015.725,
"text": " There is no working connectome right now that you can turn into a computer simulation that actually does what the brain is doing. And it's not because computers don't have the power to run the ideas that neuroscientists have developed, but because neuroscientists haven't developed ideas that actually work. It's not that neuroscientists are stupid or their ideas are not promising. They're just incomplete at this point. We don't have complete models of brains that would work in AI."
},
{
"end_time": 4059.104,
"index": 170,
"start_time": 4040.418,
"text": " The reason why AI has to reinvent things from scratch is because it takes an engineering perspective. It thinks about what would nature have to do in order to approximate this kind of function and what's the most straightforward way to implement this and test this theory. This is this experimental engineering perspective that I suspect we might also need in neuroscience."
},
{
"end_time": 4073.49,
"index": 171,
"start_time": 4059.377,
"text": " not in the sense that we translate things into von Neumann architecture in neuroscience, but in the sense that we think about what would nature have to do in order to implement the necessary mathematics to model reality."
},
{
"end_time": 4102.159,
"index": 172,
"start_time": 4073.916,
"text": " I largely agree with many of those things. I'm just trying to remember the ones that I can argue with. I love this notion that there's more money going into Hollywood films about AI than into actual AI research. I've never heard that before. That's marvelous."
},
{
"end_time": 4114.94,
"index": 173,
"start_time": 4102.671,
"text": " and also the point about sort of GPUs. I think that's just a reflection of the natural selection in the AI community of what I was trying to say before about"
},
{
"end_time": 4135.93,
"index": 174,
"start_time": 4115.162,
"text": " the move away from von Neumann architectures to more mortal computing. If you talk to people doing in-memory processing, or processing-in-memory as computer scientists call it, that's where they'd like everybody to be, and that's what I meant really by"
},
{
"end_time": 4158.285,
"index": 175,
"start_time": 4135.93,
"text": " that aspect of mortal computing: the infrastructure and the buses and the message passing, having everything local, speaking to the hardware implementation. I agree entirely that that is the direction of travel, and I didn't want to imply that"
},
{
"end_time": 4189.514,
"index": 176,
"start_time": 4159.872,
"text": " sort of GPUs were the wrong way of doing it. Also, I agree, I wasn't really referring to transformer architectures and, as you say, they're just highly expressive, very beautiful Bayesian filters and are now currently being understood as such. As my friend Chris Buckley would say, people are starting now to Bayes-splain how a transformer works."
},
{
"end_time": 4206.544,
"index": 177,
"start_time": 4190.538,
"text": " What would I disagree with? I noticed that you, on a number of occasions, were trying to identify the universal objective function: doing things better."
},
{
"end_time": 4233.814,
"index": 178,
"start_time": 4208.541,
"text": " Well, I think that ultimately utility relates to what makes the system stable and self-sustaining."
},
{
"end_time": 4262.159,
"index": 179,
"start_time": 4235.043,
"text": " So if you look at any kind of agent, it depends on what conditions can stabilize that agent. And this comes down to very much the way in which you model reality, I think. So it is about minimizing free energy in a way. But if you look at our own lives and we look for a sandwich or for love or for a relationship or for having the right partner to have children with and so on, we're not thinking very much about minimizing free energy."
},
{
"end_time": 4289.206,
"index": 180,
"start_time": 4262.944,
"text": " And we perform very local functions because we are only partial agents in a much larger system that you could understand as the project of the cell, or as the project of building complexity to survive against the increasing entropy in the universe. And so basically we need to find sources of negentropy and exploit them in the way that we can. And this depends on the agent that we currently are."
},
{
"end_time": 4316.237,
"index": 181,
"start_time": 4289.974,
"text": " This narrows down this broader notion of the minimization of free energy into more practical and applicable and narrow things that can deviate locally very much from this pure, beautiful idea. With respect to principles that should be discovered, or have to be discovered, and might be discovered in the context of AI, I suspect that self-organizing systems need different algorithms than"
},
{
"end_time": 4339.428,
"index": 182,
"start_time": 4316.391,
"text": " the GPUs that we're currently using for learning because we cannot impose this global structure on them. So I suspect that there is a training algorithm that nature has discovered that is in plain sight and that we typically don't look at and that's consciousness. I suspect the reason why every human being is conscious and no human being is able to learn something without being conscious."
},
{
"end_time": 4367.483,
"index": 183,
"start_time": 4340.145,
"text": " No human is producing complex behavior without being conscious. It's not so much because consciousness is super unique to humans and evolved at the pinnacle of evolution and got bestowed on us and us alone. We do not become conscious after the PhD. We become conscious before we can drag a finger. So I suspect that consciousness itself is an aspect of, or, depending on how you define the term consciousness, the core of a meta-learning algorithm"
},
{
"end_time": 4395.52,
"index": 184,
"start_time": 4368.114,
"text": " that allows the self-organization of information processing systems in nature. And it's a pretty radical notion. It's a conjecture at this point. I don't know whether that's true. But the idea is that you have a function that perceives itself in the act of perceiving. It's not conceptual. It's not cognitive. It's at a precognitive level, at the perceptual level, where you notice that you're noticing, but you don't have a concept of noticing yet."
},
{
"end_time": 4425.674,
"index": 185,
"start_time": 4396.561,
"text": " And from this simple loop that keeps itself stable, that is controlling itself to remain stable and remain an observer, where the observer is constituting itself as an observer, you build all the other functionality in your mind. You start imposing a general language on your substrate, a protocol that is distributed via words, so that humans become trainable and learn to speak the same language and behave in the same way, so that every part of the mind is able to talk to all the other parts of the mind. And you can impose an organization that"
},
{
"end_time": 4453.131,
"index": 186,
"start_time": 4426.067,
"text": " removes inconsistencies. This is probably one of the big differences between how biological systems learn and control the world and how artificial engineered systems do it. I agree entirely. You've brought up so many bright and interesting ideas. It's difficult to know what to comment upon."
},
{
"end_time": 4477.449,
"index": 187,
"start_time": 4453.507,
"text": " Just one thing which you said: when I pressed you on what is good, you basically said to survive. So I think that brings us back again to this notion of mortality, at the end of the day, the possibility of eluding mortality being part of… But not to survive as an individual, right? Human beings are built in such a way that we have to be mortal."
},
{
"end_time": 4504.701,
"index": 188,
"start_time": 4478.012,
"text": " We are not designs that can adapt to changing circumstances. If the atmosphere changes, if our food supply changes too much, we need to build a different organism. We need to have children that mutate and get selected for these new circumstances. But in principle, intelligent design would be possible. It's just not possible with the present architecture because our minds are not complex enough to understand the information processing of the cell well enough to redesign the cell in situ."
},
{
"end_time": 4532.09,
"index": 189,
"start_time": 4506.067,
"text": " And in principle, that's not something that would be impossible. It's just outside of the scope of biological minds so far. Right. Well, individually, we have to be mortal. But in principle, the cell can be immortal, or there could be systems that go beyond the cell, that encompass it, that are a superset of what the cell is doing and what other information processing agents could be doing in nature, that basically make sustainability happen."
},
{
"end_time": 4556.22,
"index": 190,
"start_time": 4532.381,
"text": " And I think sustainability is a better notion, in some sense, than immortality. So, yeah, again, I agree entirely. I often look at the physics of self-organization as just a description of those things that have been successful in sustaining themselves. And indeed, the free energy principle is just basically: what would that look like, and how would you write that down?"
},
{
"end_time": 4585.64,
"index": 191,
"start_time": 4556.715,
"text": " And of course, the free energy theorists would argue that the ultimate, the only objective function is a measure of that sustainability, that is, the evidence that you're in your characteristic, sustainable states. So, if properly deployed, you should be able to explain all of those aspects of behavior that characterize you and me in terms of self-evidencing or free energy minimization, such as choosing the right partner"
},
{
"end_time": 4603.404,
"index": 192,
"start_time": 4585.879,
"text": " such as"
},
{
"end_time": 4634.599,
"index": 193,
"start_time": 4604.753,
"text": " Only understandable in relation to some kind of selfhood and I'm using selfhood in the way I think you're using this basic notion of sentience. What would that mean from the point of view of the free energy principle? It would mean basically that you have an existential imperative to be curious. If you just read the"
},
{
"end_time": 4661.271,
"index": 194,
"start_time": 4636.22,
"text": " If I am choosing how to act next, then I'm going to choose those actions that minimize my expected surprise or resolve my uncertainty. I'm going to act as if I'm a curious thing and I bring that to the table because that is what is not an aspect of any of this"
},
{
"end_time": 4691.459,
"index": 195,
"start_time": 4661.476,
"text": " artificial intelligence that you described before: the machine that can translate from one language to another language, the machine that can map from some natural text to a beautiful graphic. These are wonderful and beautiful creations, and they are extremely entertaining, but they are not curious. And as such, they do not comply with the free energy principle, which means that they're not sustainable,"
},
{
"end_time": 4715.811,
"index": 196,
"start_time": 4692.039,
"text": " which means that one has to ask what's going to happen to them. Perhaps we might sustain them in the way that we do good art, but from the point of view of that kind, perhaps I shouldn't use the word biomimetic because perhaps that's too loaded, but the way of sustaining oneself through self-evidencing"
},
{
"end_time": 4745.776,
"index": 197,
"start_time": 4716.084,
"text": " I do not think admits an intelligent design of something that is not of itself curious as part of its self-organization. So where would you see curiosity as part of that? Does the AGI have to be curious? Is there any aspect of the utility afforded by, say, reinforcement learning models or deep RL or Bayesian RL? Does that have curiosity under the hood, as part of the objective function?"
},
{
"end_time": 4774.804,
"index": 198,
"start_time": 4747.261,
"text": " I really liked how you bring art into this discussion as an example of something that might be similar to an AI system that doesn't know what it's good for and only exists because we sustain it, because it's not self-sustaining. ChatGPT is not paying its own energy bills. It doesn't really care about them. It's just a system that is completing text at this point. It might, if you task it with this and it figures out the mathematics at some point, but right now it doesn't."
},
{
"end_time": 4801.425,
"index": 199,
"start_time": 4775.213,
"text": " And an artist, I sometimes joke, is a system that has fallen in love with the shape of the loss function rather than with what you can achieve with it. Art is about capturing conscious states because they are intrinsically important. Is this art, or can this be thrown away? It is art. It is important. And in this sense, art is the cuckoo child of life. It's not life itself."
},
{
"end_time": 4827.21,
"index": 200,
"start_time": 4802.193,
"text": " The artists are elves. The living organisms are orcs. They only use art for status signaling or for education or for ornamentation. The artist is the one who thinks magic is important, building palaces in our minds, showing them to each other. That's what we do. I'm much more an artist at heart than I am a practical human being that maximizes utility and survival."
},
{
"end_time": 4856.254,
"index": 201,
"start_time": 4827.722,
"text": " But I think I also can see that this is an incomplete perspective. It means that I'm identifying with a part of my mind, with the part of my mind that loves to observe and revamp the aesthetics of what I observe. I also realize that this is useful to society, because it's able to hold down a particular corner of the larger hive mind that needs to be held down. If I was somebody who would only maximize utility, I would be a great CEO, maybe,"
},
{
"end_time": 4881.305,
"index": 202,
"start_time": 4856.578,
"text": " but I would not be somebody who is able to tie together different parts of philosophy and see what I can see by combining them or by looking at them through a fresh lens. And so it's sometimes okay that we pursue things without fully understanding what they're good for, if we are part of a larger system that does understand. And our own mind is made out of lots of sub-behaviors that individually do not know what we are about."
},
{
"end_time": 4905.811,
"index": 203,
"start_time": 4881.852,
"text": " and only together they complete each other to the point where we become a system that actually understands the purpose of our own existence in the world to some degree. And of course, that also goes across people. Individually, we are incomplete. And the reason why we have relationships to other people is because they complete us. And this incompleteness that we have individually is not just inadequacy, it's specialization."
},
{
"end_time": 4928.933,
"index": 204,
"start_time": 4906.357,
"text": " The more difficulty we have finding a place in the world, the more incomplete we are. But it often also means we have more potential to do something in the area of specialization that we are in. And individually, it might be harder to find that right specialization, but to accept that individual minds are incomplete, in the way in which they're implemented in biology, I think is an important insight."
},
{
"end_time": 4956.8,
"index": 205,
"start_time": 4929.343,
"text": " And this doesn't have to be the case for an AI agent, of course, or for a God-like agent that holds down every fort, that is able to look at the world from every angle, that holds all perspectives simultaneously. Carl, did that answer your question about the curiosity of the AGI? Yes, and it brings in the sort of primacy of the observer. So I've been intrigued by this notion of being incomplete."
},
{
"end_time": 4986.613,
"index": 206,
"start_time": 4957.79,
"text": " Do you want to unpack that a little bit? Yes. First of all, Kurt, thanks for pointing out that I didn't talk about curiosity. Curiosity ties into this problem of exploration versus exploitation. The point of curiosity is to explore the unknown, to resolve uncertainties, to discover possibilities of what could also be and what we could also be doing. And this is in competition to executing on what we already know."
},
{
"end_time": 5016.118,
"index": 207,
"start_time": 4987.312,
"text": " If you are in an unknown environment, or a partially known environment, it's unclear how much curiosity you should have. And nature seems to be solving this with diversity. So you have agents that are more curious and you have agents that are less curious. And depending on the current environment and niche, they are going to be adaptive or non-adaptive, and selected for or against. So I do think, of course, curiosity is super important, but it's also what kills the cat, right?"
},
{
"end_time": 5040.657,
"index": 208,
"start_time": 5016.834,
"text": " The early worm is the one that gets eaten by the bird. And so curiosity is important. It's a good thing that we are curious and it's very important that some of us are curious and retain this curiosity so we can move and change and adapt. And it's one of the most important properties in a mind that I value, that it's curious and always open to interaction and discovering ways to grow and become something else."
},
{
"end_time": 5050.862,
"index": 209,
"start_time": 5041.51,
"text": " But it's risky to be too curious, rather than just exploiting what you already know, acting on that, and looking for the simple solution to your problems."
},
{
"end_time": 5079.753,
"index": 210,
"start_time": 5051.527,
"text": " I think it's a big problem in science that we drive the curiosity out of people. The first step in thinking is curiosity, conjecture, trying things that may not work. And then you contract, and the PhD seems to be a great filter that drives the curiosity out of people. After that, they're able to solve problems only using given methods, and they do this to themselves. It's a violation of your curious mind, but the existential questions somehow stop after graduation."
},
{
"end_time": 5107.705,
"index": 211,
"start_time": 5080.247,
"text": " So there seems to be some selection function against thinking that is largely driving the curiosity out of people, because they feel they can no longer afford it between grant proposals. And so, in a sense, yes, I would like to express how much I cherish curiosity and its importance, while pointing at the reason why not everybody is curious all the time, and why too much of a good thing is also bad. Right. And the incompleteness now."
},
{
"end_time": 5125.401,
"index": 212,
"start_time": 5108.985,
"text": " Would it be possible for you to expand"
},
{
"end_time": 5155.538,
"index": 213,
"start_time": 5125.845,
"text": " on 'the early worm gets eaten by the bird'? Because the phrase is that the early bird gets the worm. But that doesn't imply that the early worm gets eaten by the bird, because they could have different overlapping schedules. And in fact, it could be the late worm that gets eaten. And there is such a thing as a first mover advantage. AltaVista got eaten by Google because instead of giving people the search results they wanted, it gave them ads. And now Google has discovered that it's much better to be AltaVista. But AltaVista got eaten by Google. It was too early."
},
{
"end_time": 5181.698,
"index": 214,
"start_time": 5157.005,
"text": " Google has now given up on search. It instead believes in just giving you a mixture of ads that rhyme with your search term. You could say that AltaVista was the early worm. I'm just venting my frustration with Google, but I think that very often we find that the earliest attempts to do something cannot survive because the environment is not there yet. The pioneers are usually sacrificial."
},
{
"end_time": 5208.541,
"index": 215,
"start_time": 5182.688,
"text": " There is glory in being a pioneer. There is no glory in copying what worked for the pioneer, but there is very little upside in greatness. Understood, Carl. Greatness, which is not good. Greatness is not good. We come back to the tumor again. The art of good management."
},
{
"end_time": 5237.278,
"index": 216,
"start_time": 5209.582,
"text": " riffing on your focus on art, and just thinking: what makes a good CEO? Is it somebody who makes lots of money and is utilitarian, or somebody who has the art of good management and considers, as the objective function, the sustainability of his or her institution and all the people who work for it? I think there are very different perspectives on what this objective function should be, and I was trying to argue before that it's not"
},
{
"end_time": 5251.954,
"index": 217,
"start_time": 5237.278,
"text": " It can't be measured in terms of greatness or money or utility. It can only be measured in terms of sustainability. The other thing I liked was curiosity. So here's my little take on that. Curiosity killed the cat."
},
{
"end_time": 5270.725,
"index": 218,
"start_time": 5252.329,
"text": " I think that is exactly what was being implied by the importance of mortal computation and that in a sense we all die as a result of being curious after a sufficient amount of time and it can be no other way. I mean that in a very technical sense."
},
{
"end_time": 5297.363,
"index": 219,
"start_time": 5271.425,
"text": " So if you were talking to aficionados of active inference, an application of the free energy principle, what they would say is that in acting, in dissolving the exploration-exploitation dilemma, you have to put curiosity as an essential part of the right objective function that underwrites our decisions about choices and our actions, simply in the sense that the"
},
{
"end_time": 5320.725,
"index": 220,
"start_time": 5297.585,
"text": " expected surprise, or the expected log evidence or self-information, can always be written down as expected information gain plus expected utility, or negative cost, which means that just the statistics of self-organization"
},
{
"end_time": 5348.507,
"index": 221,
"start_time": 5320.862,
"text": " bake in curiosity, in the sense that you will choose those actions that resolve uncertainty, those actions that have the greatest information gain. So curiosity, I think, is a necessary part of existing, at least for things that exist in a sustainable sense. But my question was, I want to know more about this intriguing notion that we are incomplete"
},
{
"end_time": 5373.029,
"index": 222,
"start_time": 5348.899,
"text": " unless considered in the context of other things like us that constitute our lived, or at least sensed, world. But I also wanted to ask: do you see curiosity as being necessary for that kind of consciousness that you associated with sentience before? Would it be possible to be conscious"
},
{
"end_time": 5402.534,
"index": 223,
"start_time": 5373.353,
"text": " Without being curious, acknowledging there are lots of things that are not curious, you know, viruses, I suspect, are not curious. Trees are probably not that curious. They don't plan their actions to resolve uncertainty. But there are certain things that are curious, things like you and me. So I'm just wondering whether there is some, there are different kinds of things, some of which are more elaborate in terms of the kind of self-evidencing that they"
},
{
"end_time": 5427.432,
"index": 224,
"start_time": 5403.114,
"text": " I think that a good team should also contain curiosity maximizers"
},
{
"end_time": 5450.64,
"index": 225,
"start_time": 5428.285,
"text": " People that mostly are driven by curiosity. And so you have a voice in your team. And I love being that voice that is driven by finding out what could be. And you also need people who focus on execution and who are not curious at all. And in this way, I think we can be productively incomplete."
},
{
"end_time": 5481.237,
"index": 226,
"start_time": 5451.852,
"text": " If you have somebody who is by nature not very curious, but is able to accept the value of somebody who is, and vice versa, we can become specialists at being curious or at execution. And when we can inform and advise each other, we can be much better than we could be individually if we tried to do all those things simultaneously. And in this sense, I believe that if you are a state-building species, you do benefit from this kind of diversity."
},
{
"end_time": 5508.336,
"index": 227,
"start_time": 5481.749,
"text": " If you're not an individual agent that has to do all the things simultaneously. I don't know how curious trees are. I'm somewhat agnostic with respect to this. I suspect that they also need to reduce uncertainty. And I don't know how smart trees can become. When I look at means and motive of individual cells, they can exchange messages with their neighbors, right? They can also make this conditional, and evolution is probably getting them to the point where they can learn."
},
{
"end_time": 5533.712,
"index": 228,
"start_time": 5509.377,
"text": " So I don't see a way to stop a large multicellular organism that becomes old enough from becoming somewhat brain-like. But if it has no neurons, it cannot send information quickly over long distances. So it will take a very long time, compared to a brain or nervous system, for a tree to become coherent about any observation that it makes locally. It takes so much time to synchronize this information back and forth."
},
{
"end_time": 5559.155,
"index": 229,
"start_time": 5534.514,
"text": " And as a result, I would expect that the mental activities of the tree, if they exist, which I don't know, play out at such slow time scales that it's very hard for us to observe them. And so what would it look like if a tree were sentient? How would it look different from what we already observe and know? You notice that trees are communicating with other trees, that they sometimes kill plants around them, that they make decisions about that."
},
{
"end_time": 5582.824,
"index": 230,
"start_time": 5559.77,
"text": " We know that there are networks between fungi and trees that seem to be sending information over longer distances in forests. So trees can prepare an immune response to pests that invade the forest at one end while they're sitting at the other end. And we observe all this, but we don't really think about the implication. What is the limitation of the sentience of a forest? I don't know what that is."
},
{
"end_time": 5607.585,
"index": 231,
"start_time": 5583.353,
"text": " And I'm really undecided about it, but I don't see a way to instantly dismiss the idea that trees could be quite curious and could actually, at some level, reason about the world. But probably, because they're so slow, the individual tree doesn't get much smarter than a mouse, because the amount of training data that the tree is able to process in its lifetime at a similar resolution is going to be much lower."
},
{
"end_time": 5625.179,
"index": 232,
"start_time": 5608.66,
"text": " They do live a long time. I have many friends who you would enjoy talking to about that, and you seem very informed in that sphere. Our ancestors were convinced that trees could think."
},
{
"end_time": 5638.865,
"index": 233,
"start_time": 5625.401,
"text": " Fairies are the spirits of trees and they move around in the forest using the internet of the forest that has emerged over many generations of plants that have learned to speak a shared protocol."
},
{
"end_time": 5669.531,
"index": 234,
"start_time": 5639.667,
"text": " And I think it's a very intriguing idea. We should at least consider it as a hypothesis. Absolutely. There was a great BBC series on the secret life of plants where they speed things up 10 or 100 times, and plants look very sentient when you do that. Our ancestors said that one day in fairyland is seven years in human land. Maybe this alludes to this temporal difference. So, about differences between you all."
},
{
"end_time": 5694.462,
"index": 235,
"start_time": 5670.043,
"text": " Why don't we linger on consciousness? And Carl, if you don't mind answering, what is consciousness? Where is consciousness and why is consciousness? So in other words, where is it? Is it in the brain? Is it in the entire body? Is it an ill-defined question? What is it? Why do we have it? What is its function? And then we'll see where this compares and contrasts with Yoshi's thinking. Right."
},
{
"end_time": 5724.735,
"index": 236,
"start_time": 5695.486,
"text": " Yeah, I am not a philosopher, and sometimes the story I will tell depends on who I am talking to. But at its simplest, I find it easiest to think of consciousness as a process, as opposed to a thing or a state, and specifically a process of computation, if you like, or belief updating. So I normally start thinking about questions of the kind you just asked me"
},
{
"end_time": 5753.49,
"index": 237,
"start_time": 5724.855,
"text": " by replacing consciousness with evolution. So where is evolution? What is evolution? Why is evolution? All of those questions, I think, are quite easy to answer. Sometimes it's a stupid question; sometimes there's a very clear answer. So where is consciousness? Where is evolution? Well, it is in the substrate that is evolving. So where is consciousness? It will be in"
},
{
"end_time": 5768.985,
"index": 238,
"start_time": 5753.763,
"text": " the information processing, the belief updating that you get at any level."
},
{
"end_time": 5789.65,
"index": 239,
"start_time": 5768.985,
"text": " acknowledging Joscha's point that it doesn't have to be neurons. It could be mycelial networks. It could be intercellular communication. It could be electrical filaments. There is a physical instantiation of a process that can be read as a kind of"
},
{
"end_time": 5811.664,
"index": 240,
"start_time": 5789.923,
"text": " belief updating or processing. If I were allowed to read computation as that kind of process, then that would be, I think, where consciousness would be found. Would that be sufficient to ascribe"
},
{
"end_time": 5831.493,
"index": 241,
"start_time": 5811.92,
"text": " consciousness to me or to something else? I suspect not. I think you'd have to go a little bit further, and I suspect Joscha would want to articulate how much further, but there will be a focus on self-modeling. So it's not just a process of inference, it's actually inference under"
},
{
"end_time": 5861.664,
"index": 242,
"start_time": 5831.869,
"text": " I put something else into the mix as well. To be conscious, I suspect in the way that you're talking about,"
},
{
"end_time": 5884.155,
"index": 243,
"start_time": 5861.954,
"text": " means you have to be an agent, and to be an agent means that you have to be able to act. And I would say more than just acting, more than acting, say, in the way that plants will act to broadcast information that enables them to mount an immune response to parasites. They have the capacity to plan."
},
{
"end_time": 5902.432,
"index": 244,
"start_time": 5884.991,
"text": " We normally plan our day and the way that we spend our time gathering information, gathering evidence for models of the world,"
},
{
"end_time": 5923.558,
"index": 245,
"start_time": 5902.756,
"text": " in a way that can only be described as looking as if it is curious. That's why I was so fixated on the art and creativity and curiosity that Joscha was talking about previously. I think that is probably a prerequisite for being conscious in the sense that Joscha"
},
{
"end_time": 5952.654,
"index": 246,
"start_time": 5925.828,
"text": " May I ask you a clarifying question, Carl, about belief updating? If consciousness is associated with belief updating, then let's say one is a computer, a classical computer: you get updated in discrete steps, whereas the belief updating that I imagine you're referring to is something more fuzzy or continuous. So does that mean that the consciousness associated with the computer, if a computer could be conscious, is of a different sort? How does that work?"
},
{
"end_time": 5978.251,
"index": 247,
"start_time": 5953.763,
"text": " I'm not sure. In the same spirit that we don't want to overcommit to neurons doing mind work, I don't think we need to commit to a continuous or discrete space-time formulation. Again, that's an artificial divide between classical physics and quantum information-theoretic approaches."
},
{
"end_time": 6007.329,
"index": 248,
"start_time": 5978.251,
"text": " I think the deep question is: what properties must the computational process in a PC or a computer possess before you would be licensed to make the inference that it was conscious, and possibly even ascribe self-consciousness to it? The way that I would articulate that would be that you have to be able to describe everything that is observable about that computer"
},
{
"end_time": 6036.92,
"index": 249,
"start_time": 6007.329,
"text": " as if, or explain it in terms of, acting upon the world in a way that suggests, or can be explained by, having a model of itself engaging with that world. And furthermore, I would say that that model has to involve the consequences of its action, which is what I meant by being an agent."
},
{
"end_time": 6063.763,
"index": 250,
"start_time": 6037.312,
"text": " So it has to have a model, or act as if it has a model, a generative model, that could be a minimal-self kind of model but crucially entails the consequences of its own actions, so that it can plan, so that it can evince curiosity-like behavior. So that could be done in silico; it could be done with a sort of clock and"
},
{
"end_time": 6082.5,
"index": 251,
"start_time": 6063.985,
"text": " I think it's more the nature of the implicit model under the hood"
},
{
"end_time": 6111.135,
"index": 252,
"start_time": 6082.5,
"text": " that is accounting for its internal machinations, but more practically, in terms of what I can observe about that computer: its behavior and the way that it goes and gathers information, or attends to certain things and doesn't attend to other things. Okay, great. Joscha. If we think about where consciousness is, we might be biased by our propensity to assign identity"
},
{
"end_time": 6140.333,
"index": 253,
"start_time": 6111.544,
"text": " to everything. And identity does not apply to law-like things. Gravity is not somewhere; gravity is a law, for instance. Combustion is not anywhere; it's a law. It doesn't mean that it happens everywhere in the same way. It only happens when the conditions for the manifestation of the law are implemented. When they're realized in a certain region, then we can observe combustion happening. But combustion simply means that under certain conditions, you will get an exothermic reaction."
},
{
"end_time": 6153.592,
"index": 254,
"start_time": 6140.845,
"text": " And gravity means that under certain conditions, you will find that objects attract each other. And consciousness means that if you set up a system in a certain way, you will observe the following phenomena."
},
{
"end_time": 6179.48,
"index": 255,
"start_time": 6153.899,
"text": " Consciousness in this way is a software state, a representational state, and software is not a thing. The word processor that runs on your computer doesn't have an identity that would make it separate from, or the same as, the word processor that runs on another person's computer, because it's a law. It says: if you put the transistors into this state, the following thing is going to happen. So a software engineer is discovering a law, a very, very specific law"
},
{
"end_time": 6193.814,
"index": 256,
"start_time": 6179.855,
"text": " that is tailored to a particular task and so on, but it's manifested whenever we create the preconditions for that law. And so software design is about creating the preconditions for the manifestation of a law, of text processing, for instance,"
},
{
"end_time": 6220.196,
"index": 257,
"start_time": 6194.172,
"text": " that allows you to implement such a function in the universe, or to discover how it is implemented. It's not that the software engineer builds it into existence and it didn't exist before; it always would have worked. If somebody discovers this bit string in a random way and it's the same bit string implemented on the same architecture, it would still perform the same function. And in a sense, I think that consciousness is"
},
{
"end_time": 6250.026,
"index": 258,
"start_time": 6220.623,
"text": " not separate in different people. It's itself a mechanism, a principle, that increases coherence in the mind. It's an operator that seems to be creating coherence; at least that's the way I would look at it or frame it. And as a result, it produces a sense of now, an island of coherence in the potential models that our mind could have. And I think it's responsible for the fact that we perceive ourselves as inhabitants of an island of coherence in a chaotic world."
},
{
"end_time": 6268.575,
"index": 259,
"start_time": 6250.282,
"text": " Now, this island of nowness is probably not the only solution for this. I think it's imaginable that there could be a hyper-consciousness that allows you to see multiple possibilities simultaneously rather than just one, as our consciousness does, or that offers us a now that is not three seconds long, but hundreds of years long."
},
{
"end_time": 6298.319,
"index": 260,
"start_time": 6269.855,
"text": " In principle, that, I think, is conceivable. So maybe we will have systems at some point, or we already have them, that have different consciousness-like structures that fulfill a similar role of islands of coherence, or intertidal regions in the space of representations, that allow you to act on the universe. But the way it seems to be implemented in myself is particularly in the brain, because if I disrupt my brain, my consciousness ceases,"
},
{
"end_time": 6327.568,
"index": 261,
"start_time": 6298.712,
"text": " whereas if I disrupt my body, it doesn't. This doesn't mean that there are no feedback loops, bi-directional ones into my body or even outside of my body, that are crucial for some functionality that I observe as a content of my consciousness. But if you want to make me unconscious, you need to clobber my brain in some sense, nothing else. There's no other part of the universe that you can inhibit to make me unconscious. And that leads me to think that the way in which this law-like structure is implemented is, right now,"
},
{
"end_time": 6344.224,
"index": 262,
"start_time": 6327.961,
"text": " for the system that is talking to you, on my neurons, on my brain, mostly. Okay. Any objections there, Carl? No, not at all. I was just trying to remember: if Mark Solms were here, he'd tell you exactly"
},
{
"end_time": 6361.852,
"index": 263,
"start_time": 6344.565,
"text": " the size of a really small region in the brainstem. I think it's less than four cubic millimeters. If it were ablated, you would immediately lose consciousness, like that. So, you know, it's a very, very specific part of your neural architecture"
},
{
"end_time": 6379.548,
"index": 264,
"start_time": 6361.852,
"text": " I'm"
},
{
"end_time": 6408.166,
"index": 265,
"start_time": 6379.991,
"text": " that enable large-scale functionality. In some sense, everything that would disrupt the formation of coherent patterns in my brain is sufficient to inhibit my consciousness. And there are probably many such bottlenecks that provide this vulnerability. So maybe the claustrum is crucial in providing some clock function that is crucial for the formation of feedback loops in the brain that give rise to the kind of patterns that we need. Maybe there are several other such bottlenecks."
},
{
"end_time": 6425.213,
"index": 266,
"start_time": 6408.695,
"text": " This doesn't mean that the functionality is exclusively implemented in this bottleneck. No, I didn't mean to imply that the pineal gland is this thing. I didn't think that you would, but I thought it could lead to a misunderstanding in the audience, and I've heard famous neuroscientists point at such phenomena and say, oh, maybe this is where consciousness happens."
},
{
"end_time": 6445.725,
"index": 267,
"start_time": 6425.708,
"text": " So which neuroscientist has said this then?"
},
{
"end_time": 6475.879,
"index": 268,
"start_time": 6446.51,
"text": " Just to unpack: the reason that Mark would identify this is that these are exactly the cells of origin that broadcast everywhere and do induce exactly this coherence you were talking about. These are the ascending modulatory neurotransmitter systems that are responsible for orchestrating that coherence that you were talking about. And I think that's very nice because, you know,"
},
{
"end_time": 6504.753,
"index": 269,
"start_time": 6476.186,
"text": " it also speaks to the ability of consciousness-mimicking artifacts, whose ability to mimic consciousness-like behavior rests upon this modulatory, attention-like mechanism. I'm thinking again of attention heads in transformers that play the same kind of role as"
},
{
"end_time": 6534.821,
"index": 270,
"start_time": 6505.06,
"text": " the selection that these ascending neurotransmitter systems do. So if you find yourself in conversation with Mark Solms, he would argue that the feeling of consciousness arises from equipping certain coherent, coordinated interactions, which may be regulated by the cerebellum or the claustrum, but it is that regulation that actually equips consciousness with the kind of qualitative feeling, in the way that Mark Solms describes."
},
{
"end_time": 6555.52,
"index": 271,
"start_time": 6535.026,
"text": " But I mean, just notice, just reviewing what Joscha just said: he's talking about consciousness equipping us with a sense of now, and having an explicit aspect that could be, you know, I'm thinking not of Husserl, but"
},
{
"end_time": 6573.268,
"index": 272,
"start_time": 6555.776,
"text": " We're talking about processes in time."
},
{
"end_time": 6601.186,
"index": 273,
"start_time": 6573.507,
"text": " At this instant, I am conscious, or consciousness is here. We're talking about a process that by definition has to unfold in time. I think that's an important observation which sometimes eludes, I think, people debating about conscious states and conscious content, not acknowledging that it is a process."
},
{
"end_time": 6624.735,
"index": 274,
"start_time": 6601.766,
"text": " I have an open question and maybe you have a reflection on this."
},
{
"end_time": 6648.592,
"index": 275,
"start_time": 6624.991,
"text": " When we think about our own consciousness, we cannot know in principle, I think, just by introspection, whether we have multiple consciousnesses in our own mind, because we can only remember those conscious states that find their way into an integrated protocol that you can access from where you stand. We know that there are some people who have multiple personality disorder, in which the protocol itself gets splintered."
},
{
"end_time": 6677.346,
"index": 276,
"start_time": 6649.565,
"text": " As a result, they don't dream to be just one person. They dream to be alternating, to be different people that usually don't remember each other, because they don't have their shared protocol anymore. Now, my own emotion and perception are generated outside of my personal self. My personal self is downstream from them. I am subjected to my perception and emotion. I have involuntary reactions to them. But to produce my percepts and my emotions, my mind needs intelligence."
},
{
"end_time": 6704.735,
"index": 277,
"start_time": 6678.251,
"text": " It cannot be much more stupid than me. If my emotions guided me in a way that is consistently more stupid than my reason and my reflection, I don't think I would work. So there is an interesting question: is there a secondary consciousness? Is the part of your mind that generates your world model and your self-assessment, your alignment to the world, itself conscious? So basically, do you share your brain with a second consciousness that has a separate protocol,"
},
{
"end_time": 6732.961,
"index": 278,
"start_time": 6705.247,
"text": " or is this a non-conscious process that is basically just dumb and doesn't know what it's doing, in the sense that it would not be sentient in a way that's similar to my own sentience? What do you think? You should have a go at that one, and then I can think about it. Well, something I had wondered about 10 years ago or so, and I don't recall the exact argument, was that if it was the case that the graph"
},
{
"end_time": 6762.568,
"index": 279,
"start_time": 6733.148,
"text": " in our brain, let's just reduce the neurons down to a graph, that this graph somehow produces consciousness or is the same as consciousness, then if you were to remove one of those nodes, you would still have somewhat the same identity. Okay, so then does that mean that we have pretty much an infinite number of overlapping consciousnesses within our brain? I don't recall the exact argument, but it was similar to this. And then there's something related in philosophy called the binding problem. I'm uncertain what people who"
},
{
"end_time": 6784.787,
"index": 280,
"start_time": 6763.063,
"text": " Can I just then pursue that notion of the binding in the context of the kind of thing or the way I am at the moment? I think that's a very compelling notion."
},
{
"end_time": 6801.783,
"index": 281,
"start_time": 6785.435,
"text": " From the point of view of generative modeling, so I'm not answering now as a philosopher, but as somebody who may be tasked, for example, with building an artifact that would have a minimal kind of selfhood, the first thing you have to write down"
},
{
"end_time": 6824.684,
"index": 282,
"start_time": 6802.244,
"text": " is different states of mind so that I can be frightened, I can be embarrassed, I can be angry, I can be a father, I could be a football player. So all the different ways that I could be that are now conditioned upon the context in which I find myself."
},
{
"end_time": 6849.411,
"index": 283,
"start_time": 6824.906,
"text": " If that's part of the generative model, that then speaks to two things. First of all, you have to recognize what state of mind you are in, given all the evidence at hand. For example, if I want to jointly explain the racing heart that my interoceptive cues are providing me with in the interoceptive domain,"
},
{
"end_time": 6879.258,
"index": 284,
"start_time": 6849.667,
"text": " with the stiffness of my muscles that my proprioception is equipping me with, to reconcile that with my visual exteroceptive input that I'm in a dark alley, and mnemonically, I've never been here before. All of this sensory evidence might be quite easily explained by the simple hypothesis: I am frightened. And that in turn generates"
},
{
"end_time": 6907.363,
"index": 285,
"start_time": 6879.787,
"text": " covert or mental actions, and possibly even overt autonomic actions and motor actions, that provide more evidence for the fact that I am frightened, in a William James sense: you know, I have cardiac acceleration, I will have a motor response, a muscular response, appropriate for a fight-or-flight response. So just to actually be able to generate and recognize"
},
{
"end_time": 6923.353,
"index": 286,
"start_time": 6908.166,
"text": " emotional kinds of behavior. I would need to have a minimal kind of model that crucially obliged me now to disambiguate between a series of different ways of being."
},
{
"end_time": 6951.903,
"index": 287,
"start_time": 6923.746,
"text": " You know, so it's not so much, oh, I am me. That's a great hypothesis. That explains everything. But to make it operationally important, I have to actually infer I'm me in this kind of state of mind, this situation, or I'm me in this kind of situation and select the right state of mind to be in. And I think that that really does speak to this sort of notion of multiple consciousnesses that"
},
{
"end_time": 6981.647,
"index": 288,
"start_time": 6952.381,
"text": " I'm constantly seeking the way in which I complete you in terms of dyadic interactions, which means I have to recognize what kind of person you expect me to be in this setting."
},
{
"end_time": 7002.978,
"index": 289,
"start_time": 6982.261,
"text": " And of course, I can only do that if I actually have an internal model that is about me, a model that actually has this attribute of selfhood, but specifically selfhood appropriate to this context or this person or this situation. Does that make sense?"
},
{
"end_time": 7031.647,
"index": 290,
"start_time": 7004.019,
"text": " Yeah, I have a question about that. You said that you have different identities that you then select from to see which one's most appropriate for the circumstance, like a hypothesis. And is it the case then that you would say that there are multiple consciousnesses inside your brain? Or is it more like you have multiple potential consciousnesses, and then as soon as you select one, that makes it actual?"
},
{
"end_time": 7080.589,
"index": 292,
"start_time": 7058.677,
"text": " I don't know, though. I would imagine that you'd have to have another, deeper layer of your generative model that then recognizes the selection process. And indeed, this may sound fanciful, but there are, naturalized in terms of inference schemes,"
},
{
"end_time": 7108.541,
"index": 293,
"start_time": 7081.22,
"text": " models of consciousness that actually do invoke this, and I'm thinking here of the work of people like Lars Sandved-Smith, who explicitly have three levels, each level a deep generative model, very much like a deep neural network, and the role of each level is to provide the right attention heads or biasing or precision"
},
{
"end_time": 7138.37,
"index": 294,
"start_time": 7108.899,
"text": " So it may well be that to get the kind of self-awareness, if I now read awareness as deploying mental action in the service of setting the precision or the gating of various communications or processing lower down in the model, it may well be that you do need another layer of sophistication or depth to your generative models that I suspect trees don't have"
},
{
"end_time": 7164.582,
"index": 295,
"start_time": 7138.541,
"text": " But certainly you have, or I can infer that you have, given I'm assuming that I have a similar conception of consciousness. But I'm not sure that really speaks to your question, or the one that Joscha was posing: the unitary aspect of consciousness, and does that transcend"
},
{
"end_time": 7195.247,
"index": 296,
"start_time": 7165.418,
"text": " an inference that would simply be biophysically instantiated, in exactly the same way that I can register visual motion in motion-sensitive area V5 in my posterior cortex. I don't know about that. I'll pass back to Joscha on that one. Again, we need a very narrow, very tight definition of consciousness to answer this question in a meaningful way."
},
{
"end_time": 7225.043,
"index": 297,
"start_time": 7196.698,
"text": " If we see consciousness as something that we vaguely gesture at, then there could be multiple things in our understanding, and it becomes almost impossible to say something meaningful about it. So, for instance, it is conceivable that consciousness is implemented by a small set of circuits in the brain, and that all the different contents that can experience themselves as conscious are repurposing this shared functionality."
},
{
"end_time": 7246.22,
"index": 298,
"start_time": 7225.418,
"text": " We probably have only one language center, and this one language center can be used to articulate ideas by many parts of our mind, using different sub-agents that basically interface with it. You can also clearly have multiple selves interacting in your mind. Your personal self is one possible self that you can have, one that represents you as a person."
},
{
"end_time": 7274.411,
"index": 299,
"start_time": 7246.8,
"text": " But there are some people who have God talking to them in their own mind. And I think what happens there is that people implement a self that is existing, and self-identifying as existing, across minds: something that is not a model of the interests of the individual person, but a model of a collective agent that is implemented using the actions of the individual people. But of course, this collective mind that assumes the voice of God and talks to you in your own mind, so you can perceive it,"
},
{
"end_time": 7295.06,
"index": 300,
"start_time": 7274.701,
"text": " is still implemented on your own mind and uses your circuitry. It's just that your circuitry is not yours: your brain doesn't belong to your self. Your self is a creation of your own mind that symbolizes this person. People who say that God doesn't exist often forget that they themselves don't really exist in physics."
},
{
"end_time": 7324.497,
"index": 301,
"start_time": 7295.538,
"text": " This thing that I experienced as perceiving, as interacting with the world is a dream. It's a dream of what it would be like if you were a person that existed. It's virtual, right? So you can also dream being a God and this God might be so tightly implemented on your mind that it's able to use your language center and you hear its voice talking to you. But it's not more or less real than you hearing your own voice talking to you in your mind, right? It's just an implementation of a representation of agency in your mind."
},
{
"end_time": 7352.125,
"index": 302,
"start_time": 7325.657,
"text": " One crucial difference to the way in which most AI systems are implemented right now and the way in which agency is implemented on our minds is that we usually write functions in AI that perform something like a hundred steps in a neural network, for instance, and then gives a result that makes the programmer happy. And this is it. And the time series predictions in our own mind are dynamic. They're not meant to solve a particular function."
},
{
"end_time": 7382.022,
"index": 303,
"start_time": 7352.517,
"text": " but they're meant to track reality. So in a sense, our brain is more like a very complex resonator that tries to go into resonance with the world. It creates a harmonic pattern that continuously tracks your sensory data with the minimal amount of effort. And this perspective is very different: it really means your perception of the world cannot afford to deviate too much in its dynamics from the dynamics that you observe in your sensory apparatus, because otherwise future predictions become harder. You get out of sync."
},
{
"end_time": 7410.486,
"index": 304,
"start_time": 7382.381,
"text": " You always try to stay in sync with the world. And this staying in sync is really crucial for the way in which we experience ourselves in the world. As part of staying in sync, we discover our own self as the missing link between volition and the outcomes of our actions. Our body would not be discoverable to us, and is not immediately given to us, if we didn't have this loop that ties us into the outer universe and into the stuff that we cannot control directly."
},
{
"end_time": 7439.787,
"index": 305,
"start_time": 7411.135,
"text": " And for me, this question relates to, do we have only one consciousness? It occurs to me that we would not know if we have multiple ones, if they don't share memories. If I were to set up an AI architecture, where a part of the AI architecture is a model of an agent in the world. Another part of the AI architecture is a model of the infrastructure that I need to maintain to make a model of the world and such an agent in the world."
},
{
"end_time": 7470.196,
"index": 306,
"start_time": 7440.401,
"text": " I would not tell the agent how this infrastructure works, because the agent might use that knowledge to game the architecture and get a better outcome for itself, not the organism. Imagine you could game your perception so you're always happy, no matter how much you're failing in the world. From the perspective of the larger architecture, that's not desirable. So it would probably remain hidden from you how you implement it. And to me, the question is interesting. How sentient is this part of you that is not yourself?"
},
{
"end_time": 7498.899,
"index": 307,
"start_time": 7471.459,
"text": " Does it actually know what it is in real time? I think it's a very interesting and tempting philosophical question and also a practical one. Maybe there is a neuroscientific experiment that would figure out if you have two clusters of conscious experience. I don't know, wouldn't know how to measure this, but right, maybe IIT Global Workspace Theory and so on are more interesting ways than we currently think they are."
},
{
"end_time": 7518.814,
"index": 308,
"start_time": 7500.418,
"text": " Because they assume that there is just one consciousness. Of course, from the perspective of one consciousness, there is only one, because consciousness is in some sense, by definition, what's unified. But if there are multiple clusters of unification that exist simultaneously, they wouldn't know each other directly. They could maybe observe each other, but maybe not in both directions."
},
{
"end_time": 7533.37,
"index": 309,
"start_time": 7519.462,
"text": " Sorry, when you say consciousness is by definition one, is that akin to how you say software is one software as such, but specific instantiations of functional weight. So basically the most more like the universe is by definition only one."
},
{
"end_time": 7564.002,
"index": 310,
"start_time": 7534.889,
"text": " But you can have multiple universes, but this means that we define universe in a particular way. Normally, universe is used in the way of everything that feeds back information into a unified way, into a unified thing. We accept that parts of the universe get lost if they go outside of the distance where they can feed information back into you. But there's still in the way in which we think about the universe part of the universe. The universe is everything that exists. And consciousness is everything that you can be conscious of in the sense."
},
{
"end_time": 7591.425,
"index": 311,
"start_time": 7564.718,
"text": " Right. So if there is stuff in you that you're not conscious of, it doesn't mean that it's not conscious. It would just be a separate consciousness, possibly. It could also be that it's not a consciousness. And so what I don't know is, is the brain structured in such a way that can maintain only one consciousness at a time? Or could there be multiple full on consciousnesses that just don't, where we don't know about the other one? I perceive my consciousness as being focused on this content that is my personal self."
},
{
"end_time": 7620.247,
"index": 312,
"start_time": 7592.398,
"text": " I can have conscious states in which I'm not a personal self. For instance, I can dream at night that there's stuff happening, and I'm conscious of that stuff happening, but there is no I. There is no personal self. There's just this reflexive attention that is interacting with the perceptual world. In that state, I would say I can clearly show that consciousness can exist without a personal self, and the personal self is just a content. But it doesn't answer the question, are there multiple consciousnesses?"
},
{
"end_time": 7644.377,
"index": 313,
"start_time": 7620.981,
"text": " Interacting on my brain one that is maintaining my reward system and motivational system and my perception and one that is maintaining my personal self. Carl now that we've spoken about the unity of consciousness dissociation as well as even voices of God and God him or herself or itself. What does your background in schizophrenia your perspective from there have to say?"
},
{
"end_time": 7672.773,
"index": 314,
"start_time": 7645.282,
"text": " Yeah, well, that's a brilliant question and a leading question. It's what I wanted to comment on. So again, so many things have been have been unearthed here from the basic that all our beliefs, our fantasies, their hypotheses, illusions that are entrained by"
},
{
"end_time": 7700.145,
"index": 315,
"start_time": 7673.302,
"text": " the sensorium in a way that maintains some kind of synchrony between the inside and the outside. I think that's quite a fundamental thing which we haven't spoken about very much but I just want to fully endorse and of course that entrainment sometimes referred to I think as entrained hallucination, perception being hallucination that's just been entrained by sparse data."
},
{
"end_time": 7728.916,
"index": 316,
"start_time": 7700.742,
"text": " but the data themselves being actively sampled. So this loop that Josje was referring to, I think is absolutely crucial aspect of the whole sense making and indeed sense making as a self, as a cause of my own or the author of my own sensations in an active sensing or active inference. I think that's absolutely crucial. The question about are the multiple consciousnesses, I should just"
},
{
"end_time": 7754.906,
"index": 317,
"start_time": 7729.155,
"text": " Before addressing the psychiatric perspective, I have a group of colleagues including people like Maxwell Ramsted and Chris Fields and particularly Chris Fields who takes the quantum information theoretic view of this and brings to the table the notion of an irreducible Markov blanket in a computing graph that crucially has some unique properties that means that"
},
{
"end_time": 7775.094,
"index": 318,
"start_time": 7755.35,
"text": " It can only know of itself by acting on the outside or which other parts of the brain and again acting in this instance just means setting the attention or the coordination or contextualizing message passing elsewhere but the interesting notion"
},
{
"end_time": 7804.411,
"index": 319,
"start_time": 7775.555,
"text": " which is not unrelated to the pineal gland or Mark Soames' ascending neurotransmitter systems that might do this kind of action, is that there could be more than one minimal or irreducible Markov blanket that practically you can actually experimentally define in principle by looking at the connectivity of any kind. But certainly if you have sufficiently"
},
{
"end_time": 7821.374,
"index": 320,
"start_time": 7804.735,
"text": " Detailed Connectome"
},
{
"end_time": 7839.411,
"index": 321,
"start_time": 7821.647,
"text": " to actually identify candidates for irreducible Markov blankets that could be the thing that looks at the thing that's doing the thing that may have different kinds of experiences. There could be an irreducible Markov blanket"
},
{
"end_time": 7862.637,
"index": 322,
"start_time": 7840.486,
"text": " in say the Globus Pallidus that might be making sense of and acting upon the machinery that underwrites our motor behavior and our plans and our choices as opposed to something in the occipital lobe that might be more to do with perception. So I'm just saying that I don't think it's a silly question to ask. Can we empirically identify"
},
{
"end_time": 7890.452,
"index": 323,
"start_time": 7862.892,
"text": " candidates in computing architectures that would have the right kind of attributes that would be necessary to ascribe them some minimal kind of consciousness. But let me return to this key question about schizophrenia because as Joshua was talking, it did strike me. Yes, that's exactly what goes wrong in schizophrenia. Attribution of agency."
},
{
"end_time": 7915.623,
"index": 324,
"start_time": 7890.452,
"text": " delusions of control hearing your hearing voice again coming back to this notion that you know this action perception loop this this circular coupling to the world that rests upon action that has an agent and the consciousness understood as self modeling is all about"
},
{
"end_time": 7941.852,
"index": 325,
"start_time": 7915.828,
"text": " Ascribing the right agency to the outcomes of action I think is a really important notion here and it can go horribly wrong. We spend the first years of our lives working out how I cause that and you cause that and working out what I can cause and what I can't cause and what Mum causes and what other people cause. Imagine that you lost that capacity. Imagine that when you spoke"
},
{
"end_time": 7958.387,
"index": 326,
"start_time": 7942.346,
"text": " This is Chris Frith's notion or expression for auditory hallucinations, for example. You won't help to recognize that it was you that was the initiation of that speech act, whether it's actually articulated or subvocal. So just not being able to infer"
},
{
"end_time": 7986.63,
"index": 327,
"start_time": 7958.746,
"text": " Selfhood in the sense of ascribing agency to the concept the sensed consequences of action would be quite devastating. And of course you can think about reproducing these kinds of states with certain psychomimetic or psychedelic drugs. They really dissolve what we take for granted in terms of a coherent unitary content of consciousness. If you've ever had the synesthesia"
},
{
"end_time": 8002.022,
"index": 328,
"start_time": 7987.022,
"text": " the fact that color is seen and sound is heard"
},
{
"end_time": 8030.998,
"index": 329,
"start_time": 8002.415,
"text": " It doesn't have to be like that. You know, it's just that if we as sustained, inferring processes, self-evidence in computing processes that sustain in a coherent way our sense making, it looks as if colors are seen and sounds are heard. That's how that's how we make sense. It doesn't have to be like that. And you can experience the converse. You can start to see sounds. You can hear colors. You can have"
},
{
"end_time": 8043.968,
"index": 330,
"start_time": 8031.271,
"text": " A moment can actually feel as if you've nested."
},
{
"end_time": 8070.282,
"index": 331,
"start_time": 8044.394,
"text": " that you're given the right either psychopathology or pathophysiology, technically a synaptopathy of the kind you might associate with things like Parkinson's disease and schizophrenia, possibly even neurotic disorders, absolutely effective or depressive or generalized anxiety disorders can all be understood as basically a disintegration of this coherent"
},
{
"end_time": 8095.435,
"index": 332,
"start_time": 8070.52,
"text": " Synthesis and, to use your word, the binding, which means that I think the same principles could also be ascribed to consciousness itself. So depersonalization and derealization are two conditions which I've never experienced, but my understanding of"
},
{
"end_time": 8125.316,
"index": 333,
"start_time": 8095.862,
"text": " There are depersonalization syndromes where you still sense, you still perceive, but it's not you."
},
{
"end_time": 8155.367,
"index": 334,
"start_time": 8126.903,
"text": " And there are some derealization syndromes where you are there, but all your sensorium is unreal. It's not actually there anymore. You're not actually in the world. So you can get these horrible disintegration dissociative. Well, dissociative is a political term. You can get these situations where everything we take for granted about the unitary aspect of our experienced world"
},
{
"end_time": 8185.606,
"index": 335,
"start_time": 8155.606,
"text": " I would distinguish between consciousness and self more closely than you seem to be doing just now. I would say that consciousness coincides with the ability to dream or it is the ability to dream even. And in schizophrenia, the dream is spinning off from"
},
{
"end_time": 8215.316,
"index": 336,
"start_time": 8186.169,
"text": " the tightly coupled model that allows you to track reality. And when we dream at night, we are dissociated from our sensorium and the brain is probably also dissociated in many other ways. And as a result, we get split off from the ability, for instance, to remember who we are, which city we live in, what our name is very often in a dream. Even if it's a lucid dream where we get some agency over the contents of our dream, we might not be able to reconstruct our normal personality and crucial aspects of our own self."
},
{
"end_time": 8241.425,
"index": 337,
"start_time": 8216.169,
"text": " And in schizophrenia, I think this happens while we are awake, which means we start to produce mental representations that look real to us, but that have no longer the property that they are predicting what's going to happen next in the world or much later. And this ability to lose predictive power doesn't mean that they are now"
},
{
"end_time": 8258.899,
"index": 338,
"start_time": 8242.159,
"text": " more of an illusion than before. The normal stuff that has predictive power is still hallucination. It's still the trance state when you perceive something as real, as long as you perceive it as real. It's only some trance states are useful in the sense that they have predictive power, that they're useful representations, and others are not."
},
{
"end_time": 8288.677,
"index": 339,
"start_time": 8259.343,
"text": " And the ability to wake up from this notion that your representations are real is what Michael Taft calls enlightenment. He's a meditation teacher who has a pretty rational approach to enlightenment. And basically to him, enlightenment is the state where you recognize all your mental representations as representations and become aware of their representational nature. You basically realize that nothing that you can perceive is real, because everything that you can perceive is a representational content."
},
{
"end_time": 8313.677,
"index": 340,
"start_time": 8288.933,
"text": " And it's something that is accessible to your introspection via introspection if you build the necessary models for doing that. So when your mind is getting to this model level where you can construct a representation of how you're representing things, then you get some agency of how you are interacting with your representation. But I wouldn't say that somebody who is experiencing"
},
{
"end_time": 8332.381,
"index": 341,
"start_time": 8314.548,
"text": " a schizophrenic episode and or a derealizes or depersonalizes is losing consciousness. They are losing the self. They're losing coherence. They're losing the ability to track reality and the interaction between self and external world and so on. But as long as they experience that happening, they're still conscious."
},
{
"end_time": 8352.892,
"index": 342,
"start_time": 8333.695,
"text": " Does this make sense?"
},
{
"end_time": 8380.964,
"index": 343,
"start_time": 8356.271,
"text": " We met a few times since I left Germany only, mostly online. And I like Thomas a lot. I think that he is one of the few German philosophers who is reading right now. But of course, he's limited by being a philosopher, which means he is going to the point where he stops before the point where he would make actually functional models that we could test."
},
{
"end_time": 8407.432,
"index": 344,
"start_time": 8381.186,
"text": " Right. So I think his concepts are sound. He does observe a lot of interesting things and I guess a lot of it also through introspection. But I think in order to understand consciousness, we actually need to build testable theories. And I suspect even if you cannot construct consciousness as this strange loop as Hofstadter calls it from scratch, which I don't know whether we can do that. I'm"
},
{
"end_time": 8435.299,
"index": 345,
"start_time": 8407.79,
"text": " I was going to make the joke that we've offended physicists, neuroscientists and philosophers. It's mostly retaliation because I'm so offended by them. Maybe I shouldn't."
},
{
"end_time": 8463.985,
"index": 346,
"start_time": 8435.589,
"text": " I tried to study all these things and I got so little out of it. I found that most of it is just pretence. There's so little honest thinking going on about the condition that we are in. It was very frustrating to me. What field do you identify as being a part of, Joscha? Computer scientist? I like computer science most because I've discovered as a student that you can publish in computer science at every stage of your career."
},
{
"end_time": 8491.51,
"index": 347,
"start_time": 8464.599,
"text": " You can be a first semester student and you can publish in computer science because the criteria of validity are not human criteria. The stuff either works or it doesn't. Your proof either pans out or it doesn't. Whereas the criteria in philosophy are to a much larger degree social criteria. So the more your peers influence the outcome of the review and the more your peers can deviate from the actual mission of your subject in the social dynamics, the more haphazard your field becomes."
},
{
"end_time": 8519.377,
"index": 348,
"start_time": 8491.869,
"text": " And so we noticed, for instance, in psychology, we had this big replication crisis, and the replication crisis in psychology was something that was anticipated by a number of psychologists for many, many years that pointed out this curious fact that psychology seems to be the only science where you make a prediction at the beginning of your paper and it always comes true in the end. Enormous predictive power. And also pointed at all the ways in which p-hacking was accepted and legal and how poorly the statistical tools were understood."
},
{
"end_time": 8539.411,
"index": 349,
"start_time": 8519.889,
"text": " And then we have this replication crisis and 15,000 studies get invalidated more or less or no longer reliable. And somebody pointed this out in a beautiful text where they said, essentially what's happening here is that we have an airplane crash and you hear that 15,000 of your loved ones have died."
},
{
"end_time": 8569.206,
"index": 350,
"start_time": 8540.06,
"text": " And nobody even goes to the trouble to ID them, because nobody cares, because nothing is changing as a result of these invalidated studies, right? What kind of the building has just toppled? Nobody cares. There's not actually a building. There's just people talking. And when this happens, we have to be brutally honest, I think, as a field. Also, I hear very often that AI has been inspired by newer science and learned so much from it. But when I look at the actual algorithms, the last big influence was heavier learning."
},
{
"end_time": 8599.48,
"index": 351,
"start_time": 8569.872,
"text": " And the other stuff is just people talking, taking inspiration, taking insights and so on. But it's not actually there is a lot of stuff that you can take out of the formalisms of people who studied the brain and directly translate it. I think that even what Carl is doing is much more results of information theory and physics that is congruent with information theory because it's thinking about similar phenomena using similar mathematical tools and then expresses it with more Greek letters than the computer scientists used to do."
},
{
"end_time": 8627.654,
"index": 352,
"start_time": 8599.48,
"text": " But there is a big overlap in this. And so I think the separation between intellectual traditions and fields and disciplines is something that we should probably overcome. We should also probably in an age of AI, rethink the way in which we publish and think, right? Is the paper actually the contribution that we want to make in the future in the time that you can ask your LLM to generate the paper? It's maybe it's the building block, the knowledge item, the argument that is going to be the major contribution"
},
{
"end_time": 8648.626,
"index": 353,
"start_time": 8628.012,
"text": " that the scientists or the team has to make the experiment. And then you have systems that automatically synthesize this into answers to the questions that you have and you want to do something in a particular kind of context. But this will completely change the way in which we evaluate value in the scientific institutions at the moment. And nobody knows what this is going to look like."
},
{
"end_time": 8672.892,
"index": 354,
"start_time": 8649.104,
"text": " Imagine we use an LLM to read a scientific paper and we parse out all the sources of the scientific paper from the paper and what the sources are meant to argue for. And then we automatically read all the sources and check whether they actually say that what the paper is claiming the sources say. And we parse through the entire trees of discipline in this way until we get to first principles. What are we going to find? Which parts of science will hold up?"
},
{
"end_time": 8693.387,
"index": 355,
"start_time": 8673.814,
"text": " I think that we might be at the doorstep of a choice between a scientific revolution in which science becomes radically honest and changes the way it works or in which it reveals itself as an employment program, as fake jobs for people who couldn't find a job in the real economy and basically get away because their peers let them get away with it."
},
{
"end_time": 8721.459,
"index": 356,
"start_time": 8693.848,
"text": " And I try to be as pointedly as possible and as bleak as possible. So science, given its incentives that it's working under and the institutional route that has set in after decades of postmodernism, it's surprisingly good stuff. There's so many good scientists in all fields that I know. But I also noticed that many of the disciplines don't seem to be making a lot of progress for the questions that we have. And many fields seem to be stuck."
},
{
"end_time": 8745.452,
"index": 357,
"start_time": 8721.817,
"text": " This doesn't seem to be just because all the low-hanging fruits are wrapped, but I think it's also because the way in which scientific institutions' works have changed. The notion of peer review probably didn't exist very much before the 1970s. This idea that you get truth by looking at a peer-reviewed study rather than asking a person who is able to read and write such studies. That is new. That is something that didn't exist for Einstein."
},
{
"end_time": 8772.381,
"index": 358,
"start_time": 8746.817,
"text": " So I don't know if this means that Einstein was an unscientific mind that was only successful because he was working at the beginning of a discipline, or it was because he was thinking at a completely different paradigm. But no matter what, I think that AI is going to have the potential to change the paradigm massively. And I don't know which way, but I can't wait. So now that we're talking about computer scientists,"
},
{
"end_time": 8797.978,
"index": 359,
"start_time": 8773.029,
"text": " What do you make of the debacle at OpenAI? Both Carl and Joscha, directed to you, Joscha first. There's relatively little I can say because I don't actually know what the reason was for the decision of the board to fire the CEO. Firing the CEO is one of the very few moves beyond providing advice than the board can make."
},
{
"end_time": 8825.469,
"index": 360,
"start_time": 8798.37,
"text": " I thought if the board makes such a decision in a company in which many of the core employees have been hired by the CEO and have been working very closely and happily with the CEO, they will need to have a very solid case. And there needs to be a lot of deliberation among core engineers and players in the company before such a decision is being made. Apparently that has not been the case. I have difficulty to understand why people behaved in the way in which they did."
},
{
"end_time": 8854.019,
"index": 361,
"start_time": 8826.476,
"text": " The outcome is that OpenAI is more unified than ever. It's basically 95% agreement about employees that they are going to leave the company if it doesn't reinstate the CEO. It's almost unheard of. This is like an Eastern European communist dictatorship is fake elections, but it was not fake. It was basically people getting together overnight and getting signatures."
},
{
"end_time": 8883.063,
"index": 362,
"start_time": 8854.343,
"text": " for a decision that gravely impacts their professional careers. Many of them are on visa that depend on continuous employment within the company, so they enter actual risks for a time. And I also suspect that a lot of the discussions that happened were bluffs, right, when the board said, yes, they want to reinstate them, but then Waffle then came out with Emmett Scheer, who is a pretty good person, but it's not clear why the Twitch CEO would be the right person to lead OpenAI suddenly."
},
{
"end_time": 8907.108,
"index": 363,
"start_time": 8883.353,
"text": " So I don't even know whether the decision was made because there were personal disagreements about communication styles or whether it was about the direction of the company where members of the board felt that AI is going to be developed too quickly and should be slowed down significantly. And the strategy of Sam Altman to run CHED GPT at a loss"
},
{
"end_time": 8937.005,
"index": 364,
"start_time": 8907.534,
"text": " and making up for this by speeding up the development and getting more capital in and thereby basically creating an AGI or bus strategy for the company might not be the right strategy. Also, the board members don't hold equity in the company. So this is the situation where the outcome of their decision is somewhat divorced from their own material incentives and it is more aligned with their political or ideal ideals that they might have or the goals that they have. And again,"
},
{
"end_time": 8965.094,
"index": 365,
"start_time": 8937.534,
"text": " Not all of them are hardcore AI researchers. Some of them are. I don't really know what the particular discussions have been in there. And of course, I have more intimate speculations at some discussions with people at OpenAI, but I cannot disclose the speculations, of course. And so at the moment, I can only summarize in some sense what's publicly known and what you can read on Twitter. It's super exciting. It has kept us all awake for a few days. It's a fascinating drama."
},
{
"end_time": 8995.06,
"index": 366,
"start_time": 8965.845,
"text": " And I'm somewhat frustrated by people say, oh my God, this is destroyed trust in open AI. If the decisions can be so erratic because open air should be like a bureaucracy that is not moving in a hundred years. No, this is part of something that is super dynamic and is changing all the time. I think that what the board should probably have seen is that the best possible outcomes that I could have achieved is that open AI is going to split that."
},
{
"end_time": 9024.633,
"index": 367,
"start_time": 8995.589,
"text": " the best possible in the sense of the board trying to fire Sam Altman to change the course of the company. They would have created one of the largest competitors to OpenAI. And so basically an anti-anthropic on the other side of OpenAI that is focusing more on accelerating AI research. It would have been clear that many of the core team members would join it and it would destroy a lot of the equity that OpenAI currently possesses. And it would take away large portions of OpenAI's largest customers, Microsoft."
},
{
"end_time": 9052.892,
"index": 368,
"start_time": 9025.981,
"text": " These are some observations. Sam is back now. Yes. It was clear that it would happen. This move by Satya Nadella to say he works now for Microsoft happened not after negotiating a new organization for a month. It happened in an afternoon after it was announced that the board now has another candidate that they secretly got talked into taking on this role."
},
{
"end_time": 9073.933,
"index": 369,
"start_time": 9053.268,
"text": " Microsoft basically set up as a threat. Okay, they're all going to come to us and every open AI person who wants cannot join Microsoft in a dedicated autonomous unit with details that are yet to be announced, but they're not going to be materially or worse off or research wise worse off. So this is a backstop that Microsoft had to implement to prevent its stock from tumbling."
},
{
"end_time": 9098.643,
"index": 370,
"start_time": 9075.06,
"text": " Monday morning. So Microsoft moved very fast on Sunday and decided we are going to make sure that we are not going to create a situation that is worse for us than it was before. And this creates enormous pressure on OpenAI to basically decide either we are going to be alone without most of the core employees and without our business model. But having succeeded in what the board wants,"
},
{
"end_time": 9127.91,
"index": 371,
"start_time": 9099.138,
"text": " And Carl, what do you make of it, the whole fiasco?"
},
{
"end_time": 9145.145,
"index": 372,
"start_time": 9128.183,
"text": " I was listening with fascination. I think you have more than enough material to keep your viewers engaged. Is OpenAI going to be ingested by Microsoft or not then? Do you think OpenAI is going to survive by itself?"
},
{
"end_time": 9174.189,
"index": 373,
"start_time": 9146.049,
"text": " Some people are joking that OpenAI's goals is to make Google obsolete, to replace search by intelligence, and Google is too slow to deliver a product to deal with this impending competition. OpenAI has rapidly growing in the last few months, has hired a lot of people who are focusing on product and customer relationships. The core research team has been growing much more conservatively. And I think that"
},
{
"end_time": 9203.251,
"index": 374,
"start_time": 9174.753,
"text": " Microsoft was a natural partner for OpenAI in this regard because Microsoft is able to make large investments and yet is possibly not as agile as Google. The risk that if OpenAI would partner with Google as a main customer that Google at some point would just walk away with the core technology and some of the core researchers might be larger than this Microsoft, but they can only speculate there. So the last question for this podcast is how is it that you all prevent"
},
{
"end_time": 9233.49,
"index": 375,
"start_time": 9203.882,
"text": " An existential crisis from occurring with all this talk of the self as an illusion or our beliefs which are so associated with our conception of ourselves. Mutable identities and competing, contradictory theories of terrifying reality being entertained. Well, Carl. This Marshawn beast mode Lynch prize pick is making sport season even more fun on prize picks, whether"
},
{
"end_time": 9294.258,
"index": 378,
"start_time": 9266.22,
"text": " I'm just trying to get underneath the question."
},
{
"end_time": 9310.282,
"index": 379,
"start_time": 9294.94,
"text": " These the kind of illusions i think we're talking about are the stuff of the lived world and the experienced world and they are not"
},
{
"end_time": 9336.34,
"index": 380,
"start_time": 9310.93,
"text": " weak or facile or facsimiles of reality. These are the fantastic objects, belief structures that constitute reality. So literally, as I'm sure we've said before, the brain as a purveyor of these fantasies, these illusions is fantastic, literally, because it has the capacity to entertain these fantasies."
},
{
"end_time": 9358.183,
"index": 381,
"start_time": 9336.698,
"text": " So I don't think there should be any worry about somehow not being accountable to reality. These are fantastic objects that we have created, co-created, you could argue given some of our conversations, that constitute our reality."
},
{
"end_time": 9388.029,
"index": 382,
"start_time": 9359.701,
"text": " I think that existential crisis is a good thing. It basically means that you are getting to a point where you have a transition in front of you, where you basically realize that the current model is not working anymore and you need a new one. And an existential crisis doesn't necessarily result in death. It typically results in transformation into something that is more sustainable, because it understands itself and its relationship to reality better."
},
{
"end_time": 9418.131,
"index": 383,
"start_time": 9388.524,
"text": " The fact that we have existential questions and that we want to have answers for them is a good thing. When I was young, I thought I don't want to understand how music actually works because it would remove the magic. But the more I understood how music works, the more appreciative I became of deeper levels of magic. And I think the same is true for our own minds. It's not like when we understand how it works that it loses its magic. It just removes the stupidity of superstition and gives us something that"
},
{
"end_time": 9432.585,
"index": 384,
"start_time": 9418.473,
"text": " Thank you, Joscha. Thank you, Karl. There's a litany of points for myself, for the audience, for all of us to chew on."
},
{
"end_time": 9458.387,
"index": 385,
"start_time": 9432.944,
"text": " Over the course of the next few days, maybe even weeks. Thank you. Thank you, Curt, for bringing us together. Karl, I really enjoyed this conversation with you. It was brilliant. I like that you think on your feet, that we have this very deep interaction. I found it interesting that we agree on almost everything. We might sometimes use different terminology, but we seem to be looking at the same thing from pretty much the same perspective."
},
{
"end_time": 9481.459,
"index": 386,
"start_time": 9458.814,
"text": " And I also really enjoyed it, it was a very, very engaging conversation. And I love the way that you're not frightened to upset people and tell things that they are looking for a job in academia. Good. I still don't have your balls. Well done. Have a wonderful rest of the day. Thank you. All right, take care. Thanks very much."
},
{
"end_time": 9490.299,
"index": 387,
"start_time": 9482.022,
"text": " By the way, if you would like me to expand on this thesis of multiple overlapping consciousnesses that I had from a few years ago, let me know and I can look through my old notes."
},
{
"end_time": 9513.575,
"index": 388,
"start_time": 9490.657,
"text": " Alright, that's a heavy note to end on. You should know, Joscha has been on this podcast several times: once solo, another with Ben Goertzel, another with John Vervaeke, another with Michael Levin, and one more with Donald Hoffman. Whereas Karl Friston has also been on several times: twice solo, another between Karl Friston and Michael Levin, and another with Karl and Anna Lembke. That one's coming up shortly."
},
{
"end_time": 9542.944,
"index": 389,
"start_time": 9513.78,
"text": " Links to every podcast mentioned will be in the description, as well as links to any of the articles or books mentioned, as usual in every single Theories of Everything podcast. We take meticulous timestamps and we take meticulous notes. If you'd like to donate, because this channel has had a difficult time monetizing with sponsors, and sponsors are the main bread and butter of YouTube channels, then there are three options. There's Patreon, which is a monthly subscription. It's patreon.com slash Curt Jaimungal. Again, links are in the description."
},
{
"end_time": 9570.93,
"index": 390,
"start_time": 9542.944,
"text": " There's also PayPal for one-time sums. If you like, it's also a place where you can donate monthly; there's a custom way of doing so. And the amount that goes to the creator, aka me in this case, is greater on PayPal than on Patreon, because PayPal takes less of a cut. There's also cryptocurrency, if you're more familiar with that, and the links to all of these are in the description. I'll say them aloud in case you're away from the screen. It's tinyurl.com slash, all of this is lowercase,"
},
{
"end_time": 9573.831,
"index": 391,
"start_time": 9571.169,
"text": " PAYPAL"
},
{
"end_time": 9601.988,
"index": 392,
"start_time": 9574.138,
"text": " Thank you. Thank you for your support. It helps TOE continue to run. It helps pay for the editor who's doing this right now. My wife and I are extremely grateful for your support. We wouldn't be able to do this without you. Thank you."
},
{
"end_time": 9628.78,
"index": 393,
"start_time": 9602.398,
"text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so, as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and build, as a community, our own TOEs. Links to both are in the description."
},
{
"end_time": 9644.36,
"index": 394,
"start_time": 9628.78,
"text": " Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well."
},
{
"end_time": 9665.23,
"index": 395,
"start_time": 9644.616,
"text": " Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in Theories of Everything and you'll find it. Often I gain from re-watching lectures and podcasts, and I read in the comments that TOE listeners also gain from replaying. So how about instead re-listening on those platforms?"
},
{
"end_time": 9694.531,
"index": 396,
"start_time": 9665.23,
"text": " iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. If you'd like to support more conversations like this, then do consider visiting patreon.com slash Curt Jaimungal and donating whatever you like. Again, it's support from the sponsors and you that allows me to work on TOE full time. You get early access to ad-free audio episodes there as well. For instance, this episode was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough."
}
]
}