David Chalmers: Are Large Language Models Conscious?
November 9, 2023
1:04:46
Transcript
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze.
Culture too: they analyze finance, economics, business, and international affairs across every region. I'm particularly liking their new insider feature, which was just launched this month. It gives you (it gives me) front-row access to The Economist's internal editorial debates,
where senior editors argue through the news with world leaders and policymakers in twice-weekly long-format shows. Basically, an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
Listen up, people. Over the years, I've learned how important hydration is in my workouts and everyday life. It's the key to helping my body move, recover, and just have a good time when I'm exercising and staying active. Things go even better when I'm well hydrated, before I even start moving.
Nuun Hydration doesn't want you to wait to hydrate. They want you to start hydrated. Nuun Sport hydration tablets hydrate better than water alone. Just drop them in your water, let them dissolve, and enjoy. They contain five essential electrolytes and come in crisp and refreshing flavors like strawberry lemonade, lemon lime, and many more. They also have non-GMO, vegan, and gluten-free ingredients and only one gram of sugar.
Nuun hydrates you so you can do more, go further, and recover faster. And that means you can have a heck of a lot more fun. Since hydrated humans have the most fun, head over to Shop Now on nuunlife.com and pick some up today so you're ready for anything. Because anything can happen after noon.
Will future language models and their extensions be conscious? Developing conscious AI may lead to harms to human beings, and may also lead to harms to these AI systems themselves. So I think there needs to be a lot of reflection about these ethical questions.
This is a presentation by David Chalmers, as well as questions posed to him, and to me and Susan Schneider, by you, the in-person audience, earlier this year at MindFest Florida, spearheaded by Susan Schneider. All of the talks from that AI and consciousness conference are in the description. I'll list some of them aloud. For instance, there's one by Anand Vaidya. This was his presentation on moving beyond non-dualism
and integrating Indian frameworks into the AI discussion. Ben Goertzel also appeared, giving his talk on AGI timelines. You should also know that Ben Goertzel recently came on with Joscha Bach for a theolocution, and that's in the description as well. There's Claudia Passos, there's Garrett Mindt and Carlos Montemayor on Petri minds and what it takes to build non-human consciousness. There's also a Stephen Wolfram talk on AI, ChatGPT, and so on. We're also going to be releasing the second Wolfram talk, as it's considered by many of the people there to be his best.
Why? Because it was aimed at high schoolers, even the lay public. And Stephen does a great job at explaining the connection between AI and physics with his hypergraph-slash-ruliad approach. That one's taking a bit of time, unfortunately, because the audio became messed up, and so what we have to do is reconstruct it using AI.
My name is Curt Jaimungal, and this is a podcast called Theories of Everything, where I use my background in math and physics to investigate theories of everything such as loop quantum gravity, string theory, and even Wolfram's. But we also touch on consciousness: what role does consciousness have in fundamental law? Is it an emergent property? Is it something more? If you like those subjects, as well as what you saw on screen, then feel free to subscribe. We've been having issues monetizing the channel with sponsorships.
Okay, so let's go ahead and get started.
Okay, well, thank you so much, Susan, for this amazing MindFest 2023. And thanks so much to Stephen and Misha and Simone and everybody else who's been involved in putting together this conference. It's been quite a memorable event already. Yeah, so Susan just asked me to give a brief overview of this
paper I wrote, "Could a Large Language Model Be Conscious?" I gave this at the NeurIPS conference, the big machine learning conference, in late November, and subsequently wrote it up. Maybe some people will have gotten as far as having read the first few pages, but I thought I'd just summarize some of the ideas. I actually started out, when I was a grad student 30-plus years ago, working on neural networks.
Back in the early 90s, in Doug Hofstadter's AI lab, I did a bunch of stuff related to models of language, like active-passive transformations, and models of the evolution of learning. And then I kind of got distracted by thinking about consciousness for a little while. But so it's very cool to
have this intersection of issues about neural networks and machine learning and consciousness become so much to the fore in the last year or so.
One high point of interest was this big brouhaha last June, where Blake Lemoine, an engineer at Google, said he detected sentience in one of Google's AI systems, LaMDA 2. Maybe it had a soul, maybe it had consciousness. And this led to a whole lot of skepticism, a whole lot of discussion. Google
themselves said: our team, including ethicists and technologists, has reviewed Blake's concerns, and we've informed him that the evidence doesn't support his claims. There's no evidence that LaMDA was sentient, and lots of evidence against it. This was already very interesting. Okay, evidence, good. What's notable is that they didn't actually lay out the evidence. I'm just curious: what was the evidence in favor of his claims, and what was the evidence against it? And really, this talk was, in a way, just a way to try and make sense of those questions, evidence for and evidence against, asking questions like: are current language models conscious? Could future language models be conscious? Also, could future extensions of language models be conscious, as when we combine language models, say, with perceptual mechanisms, or with action mechanisms in a body, or with database lookup and so on? And what challenges need to be overcome on the path to conscious machine learning systems, at each point trying to examine the reasons?
So here I'll just briefly go over: clarifying some of the issues, looking briefly at some of the reasons in favor, looking at some of the reasons against, and drawing conclusions about where we are now and about possible roadmaps between where we are now and machine consciousness. As I understand the term consciousness (and we've already had a lot of discussion of this at the conference, especially yesterday morning, so the ground has been prepared), these words, like consciousness and sentience, get used in many different ways. At least as I understand them,
I basically mean subjective experience. A being is conscious if there's something it's like to be that being, that is, if it has subjective experience. Yesterday, I think it was Anand talking about Nagel's "What is it like to be a bat?" The question now is: what is it like, is there anything it's like, to be a large language model or future extensions of large language models?
So, consciousness includes many different kinds, many different varieties. You can divide it into types in various ways. I find it useful to divide it into a few categories. Sensory experience, like seeing and hearing. Affective experience, the experience of value, things, you know, feeling good or bad, like pain or pleasure. Feeling emotions, like happiness and sadness. Cognitive experiences,
like the experience of thinking, agentive experience, experience of acting and deciding, like an agent. And all of these actually can be combined with self-consciousness, awareness of oneself. Although I take self-consciousness just to be one kind of consciousness. Awareness, consciousness of the world may be present even in simple systems that don't have self-consciousness. And some of these
components may be more apt to be had by language models than others. I think it's very important to distinguish consciousness from intelligence. Intelligence I understand roughly in terms of behavior and the reasoning that guides behavior. Intelligence is roughly, you know, being able to do means-ends reasoning in order to achieve multiple ends in many different environments; ultimately a matter of behavior.
Consciousness comes apart from that. I think, you know, consciousness may be present in systems which are fairly low on the intelligence scale. It certainly doesn't require anything like human-level intelligence. There's a good chance that worms are conscious or fish are conscious. Well, sure. So the issue of consciousness isn't the same as the issue of human-level artificial general intelligence. Consciousness is subjective.
You might ask why consciousness matters. I mean, it would be nice to say, well, one reason why consciousness matters is that consciousness is going to give you all these amazing capacities: conscious systems will be able to do things that other systems can't do. That may be true, but actually, right now, we understand the function of consciousness sufficiently badly that there's nothing I can promise you conscious systems can do that unconscious systems can't do. But one reason, one fairly widely acknowledged reason, why it matters is that consciousness matters for morality and moral status. If I say an animal, like say a fish, is conscious, if it can suffer, that means in principle it matters how we treat the fish. If it's not conscious, if it can't suffer and so on, then it doesn't matter in the same way how we treat it. And that, again, we were talking about yesterday. So if an AI system is conscious, suddenly it enters our moral calculations. We have to think about, you know, boy, if the training we're inflicting on a machine learning system actually inflicts suffering, a possibility some people have taken seriously, we need to worry about whether we're actually creating a moral catastrophe
by training these systems. At the very least, we ought to be thinking about methods of dealing with these AI systems that minimize suffering and other negative experiences. Furthermore, even if conscious AI doesn't suffice for human-level AI (maybe it'll just be, say, fish-level or mouse-level AI), it'll be one very important step on the road to human-level AI. If we could be confident of consciousness in an AI system, that would be a very significant step. Okay, so reasons for: again, I'll just go over and summarize a few reasons in favor of consciousness in current language models. And I put this in the form of a certain kind of request for reasons.
I ask a proponent of language models being conscious to articulate a feature X such that language models have X, and also such that if a system has X it's probably conscious, and ideally to give good reasons for both of those claims. I'm not sure there is such an X, but, you know, a few have at least been articulated. For Blake Lemoine, it seemed that what actually moved him the most was self-report: the fact that LaMDA 2 said, yes, I am a sentient system, and, asked what its sentience and consciousness were like, it would explain this: I'm sometimes happy and sad. Interestingly, in humans, verbal report is actually
typically our best guide to consciousness, at least in the case of adults. Claudia was talking about, you know, cases like infants and animals where we lack verbal reports, so you might think we're in a more difficult position. Well, actually, in the case of language models, we have verbal reports. You would think: great, they say they're conscious, we'll use that as evidence. Unfortunately, as Susan and Ed and others have pointed out,
you know, this evidence is not terribly strong in a context where these language models have been trained on a giant corpus of text from human beings, who are of course conscious beings and who talk about being conscious. So it's fairly plausible that a language model has just learned to repeat those claims, at least in this special case. You know, Susan and Ed's artificial consciousness test is super interesting; it basically looks at your reactions to thought experiments about consciousness. But in this special case, where they've been trained on so much text already, I think it carries less weight. Maybe I'll skip over seeming conscious and go to conversational ability, which is relevant here. You know, a lot of people have been very, very impressed by the conversational ability of these recent systems. I guess ChatGPT was fine-tuned for conversation, unlike the basic GPT-3 model, and now we have GPT-4, which also appears to have been fine-tuned for conversation. And, you know, conversational ability is one of the classic criteria for thought in AI systems, articulated by Turing back in his 1950 paper on the imitation game. He basically thought that if a machine behaved
in a way sufficiently indistinguishable from a human, then we might as well say it can think, or that it's intelligent. So these language models: I think they've not yet passed the Turing test. Anyone who wants to probe sufficiently well can find glitches and mistakes and idiosyncrasies in these systems. That said, I got access to GPT-4, and through interacting with that just over the last couple of days,
it kind of feels a lot like... I was saying that GPT-3 felt like talking to a smart eight-year-old; I think now, with GPT-4, it's at least a teenager. Maybe it's even approaching a sophisticated adult in a lot of its conversations. Yes, it makes mistakes. You know, one Turing-test-style probe you might do is ask it questions like: what is the fifth word of this sentence? And apparently it always gets this kind of thing wrong. It will say something like "fifth".
No, actually the fifth word of that sentence was "word". What was the fifth letter of the third word of this sentence? It'll get that wrong. Okay, a lot of people will get that wrong too, so that's not a guarantee that it's failing the Turing test. We're getting close.
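To make that probe concrete, here is a minimal sketch. It's an illustration only: the sample answer below is a stand-in for the kind of mistake described, not a recorded GPT-4 output.

```python
# Minimal sketch: score a reply to the "what is the fifth word of this
# sentence?" style probe mentioned above. model_answer is a placeholder
# for whatever the system replies, not an actual model output.
def nth_word(sentence: str, n: int) -> str:
    """Return the n-th word (1-indexed), ignoring surrounding punctuation."""
    words = [w.strip(".,?!;:") for w in sentence.split()]
    return words[n - 1]

probe = "What is the fifth word of this sentence?"
expected = nth_word(probe, 5)      # -> "word"
model_answer = "fifth"             # the kind of wrong reply described above
print(expected, model_answer == expected)   # prints: word False
```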
That said, I think conversational ability here is relevant not so much in its own right, but because it's a sign of general intelligence. If you're able to talk about a whole lot of different domains, and play chess, and code, and talk about social situations and scientific situations, that suggests general intelligence. So a lot of people have thought there are at least correlations between domain-general abilities and consciousness. Information you can only use for certain special purposes
is not necessarily conscious; information available for all kinds of different activities in different domains is often held to go along with consciousness. So that, at least, I think gives us some basic reasons. I take this one as probably the most serious reason for taking seriously the possibility of consciousness in these systems. That said,
I don't think, just looking at the positive evidence, I don't think any of this provides remotely conclusive evidence that language models are conscious, but I do think the impressive general abilities give at least some limited initial support for taking the hypothesis seriously, at least taking it seriously enough to now to look at the reasons against, to see, you know, what are these? Everyone says, okay, look, there's all kinds of evidence, these systems are not conscious. Well, okay, what are those reasons? I think they're worth
examining. So the flip side of this is to look at the reasons why language models are not conscious. The challenge here for opponents is to articulate a feature X such that language models lack X, and such that if a system lacks X it's probably not sentient, and then, again, try to give good reasons for one and two. And here we have six different reasons that I consider
in the paper. The first one, which I just considered very briefly, is the idea that consciousness requires biology, carbon-based biology. Therefore, a silicon system is not biological and lacks consciousness. That would rule out pretty much all AI consciousness, if correct, at least all silicon-based AI
consciousness. This one is a really familiar issue in philosophy; it goes back to issues of, you know, the soul and the Chinese room, and all kinds of things. It's a very well-trodden debate, and I'm really here more interested in issues more specific to language models, so I'll pass over this one quickly. Maybe closer to the bone for language models specifically is the issue of having senses and embodiment. A standard language model has nothing like human senses, you know, vision and hearing and so on; no sensory processing, so they can't sense. That suggests they have no sensory consciousness. Furthermore, they lack a body. If they lack a body, it looks like they can't act. If so,
Maybe no agentive consciousness. Some people have gone on to argue that because of their lack of senses, they may not have any kind of genuine meaning or cognition at all. We need senses for grounding the meaning of our thoughts. There's a huge amount to say about this. In fact, I recently had to give a talk at the American Philosophical Association and just talk completely about this issue. But one way just to briefly
cut this issue off is to look at all of the developing extensions of language models that actually have some forms of sensory processing. For example, GPT-4 is already designed to process images. It's what's called a vision language model. And I gather, although this is not yet fully public, that it can process sound files and so on. So it's a multimodal model. This is DeepMind's Flamingo, which is another vision language model.
You might say: what about the body? But people are also combining these language models with bodies now. Here's Google SayCan, where a language model actually controls a physical robot. And here's one of DeepMind's models, MIA, which controls a virtual body in a virtual world. And that's become a very
big thing that connects to the issues I'll be talking about later this afternoon, about virtual reality and virtual worlds. But we're already moving to a point where I think it's going to be quite standard, quite soon, to have extended language models with senses and embodiment, which will tend to overcome the objection from lack of senses and embodiment. Another issue is the issue of world models. There's this famous criticism by
Timnit Gebru, Emily Bender, and others that language models are stochastic parrots: they just minimize text-prediction error; they don't have genuine understanding, meaning, world models, self-models. There's a lot to this. Again, I take world models to be the crucial part of this question, because world models are plausibly something which is required for consciousness.
There's actually a lot of interesting work in interpretability recently, actually trying to detect world models in systems. Here's someone trying to detect a world model in a GPT-style system playing Othello, and they actually find some interesting models of the board.
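As a rough illustration of how that kind of world-model probing is usually set up, here is a minimal sketch of a linear probe trained to read the board out of hidden activations. It is not the researchers' actual code; the activations below are random placeholders standing in for states captured from a real model.

```python
# Illustrative world-model probe (placeholder data, not the original study's code):
# train a linear probe to predict the Othello board from a model's hidden states.
import torch
import torch.nn as nn

hidden_dim, n_squares, n_states = 512, 64, 3   # 64 squares, {empty, mine, theirs}

# Placeholders: in the real work, `acts` are hidden states recorded while a
# GPT-style model processes Othello move sequences, `board` the true positions.
acts = torch.randn(10_000, hidden_dim)
board = torch.randint(0, n_states, (10_000, n_squares))

probe = nn.Linear(hidden_dim, n_squares * n_states)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    logits = probe(acts).view(-1, n_squares, n_states)   # one 3-way guess per square
    loss = loss_fn(logits.permute(0, 2, 1), board)        # class dim must come second
    opt.zero_grad(); loss.backward(); opt.step()

# If a probe like this recovers the board far better than chance on held-out
# games, the activations encode something board-like: an internal world model.
```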
A slightly more technical issue is that these language models are feed-forward systems that lack the memory-like internal state of the kind you find in recurrent networks. Many theories of consciousness say that recurrent processing, and a certain form of short-term memory, are required for consciousness. Here's a standard LSTM, a standard recurrent network, whereas transformers are largely feed-forward; they've got some quasi-recurrence from this recirculation of inputs and outputs. That said, we don't know the architecture of GPT-4. There are rumors that it involves more recurrence than GPT-3. So this also looks like a temporary limitation.
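For readers who want the architectural contrast spelled out, here is a tiny illustrative sketch (not GPT's internals): the recurrent network exposes a state that can be carried into the next call, while a standard transformer layer does not.

```python
# Illustrative contrast (not GPT's actual code): recurrent state vs. feed-forward.
import torch
import torch.nn as nn

d_model = 64
tokens = torch.randn(1, 10, d_model)               # stand-in for embedded tokens

# Recurrent: the (h, c) state persists and can be fed into the next call,
# giving the network a genuine short-term memory between inputs.
lstm = nn.LSTM(d_model, d_model, batch_first=True)
out, (h, c) = lstm(tokens)
more_out, _ = lstm(torch.randn(1, 5, d_model), (h, c))   # memory carried forward

# Feed-forward: a transformer encoder layer maps the visible context to outputs
# in one pass; nothing persists afterwards except what gets re-fed as input,
# the "quasi-recurrence" from recirculating inputs and outputs mentioned above.
block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
out = block(tokens)                                        # no state object returned
```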
Fifth is the question of a global workspace. Perhaps the leading scientific theory of consciousness (Claudia talked about it yesterday) is that consciousness involves this global workspace for connecting many modules in the brain. It's not obvious that language models have these.
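For a rough sense of what a global workspace means computationally, here is a toy sketch under one common interpretation: modules propose signals, a competition picks what enters the workspace, and the winner is broadcast back to every module. It is an illustration only, not any particular theory's or lab's implementation.

```python
# Toy global-workspace sketch (illustrative only): modules propose signals,
# a competition selects what enters the workspace, and the result is broadcast.
import torch
import torch.nn as nn

d = 32
modules = nn.ModuleList([nn.Linear(d, d) for _ in range(4)])  # e.g. vision, language, ...
salience = nn.Linear(d, 1)                                    # scores each proposal
inputs = [torch.randn(1, d) for _ in modules]

proposals = torch.stack([m(x) for m, x in zip(modules, inputs)], dim=1)  # (1, 4, d)
weights = torch.softmax(salience(proposals).squeeze(-1), dim=-1)         # competition
workspace = (weights.unsqueeze(-1) * proposals).sum(dim=1)               # winning content

# Broadcast: every module receives the same workspace vector on the next step,
# the "globally available" information that workspace theories emphasize.
next_inputs = [x + workspace for x in inputs]
```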
On the other hand, if you ask me, the feature that many current language models lack that seems most crucial is some kind of unified agency. It looks like these language models can take on many different personas, like actors or chameleons. Yeah, you can get LaMDA to say it's sentient; you can just as easily get it to say it's not sentient. You can get it to simulate a philosopher or an engineer or a politician. They seem to lack stable goals and beliefs of their own, which suggests a certain lack of unity, and many people think consciousness requires more unity than that. But again, that said, there's a lot to say about this, and there is a whole emerging literature on agent modeling or person modeling where you develop these systems. At the very least, it's very easy to fine-tune these systems
to act more like a single individual. But there are also projects of trying to train some of these systems from the ground up to model a certain individual, and that's certainly coming; perhaps some reason to think those systems will be more unified. Okay, so then, looking at those six reasons against: some of them are actually reasonably strong and plausible requirements for consciousness, and it's reasonably plausible that current language models lack them. I think especially the last three I view as quite strong. That said, all of those reasons look quite temporary. We're already developing models of global workspace. Recurrent language models exist and are going to be developed further. More unified models:
There's a clear research program there, so it's interesting to me that the strongest reasons all look fairly temporary. So, just to sum up my analysis, are current language models conscious? Well, no conclusive reasons against this, despite what Google says, but still there are some strong reasons, reasonably strong reasons, to deny that they're conscious corresponding to those requirements,
I think it would not be unreasonable to have fairly low confidence that current language models are conscious. But looking ahead 10 years (it was 2032 when I gave the talk, I guess now 2033), will future language models and their extensions be conscious? I think there's good reason to think they may well have overcome the most significant and obvious obstacles to consciousness. So I think it would be reasonable to have at least somewhat higher credence. You shouldn't be too serious with numbers on these things, but when asked to put a number on this, I'd say I think there's at least, say, a 20% chance that we'll have conscious AI by 2032. And if so, that'll be quite significant. So, conclusion: questions about AI consciousness are not going away. Within 10 years, even if we don't have human-level AGI, we'll have systems that are serious candidates for consciousness. And meeting the challenges to consciousness in language models could actually yield a potential roadmap to conscious AI. And actually, in the longer version of the paper, I laid out something of a roadmap. I mean, there are some
philosophical challenges to be overcome (better evidence, better theory, better interpretability; the ethics is all-important), but also some technical challenges: rich models in virtual worlds, with robust world models, with genuine memory and recurrence, global workspace, unified person models, and so on. But let's just say we actually overcame those challenges. All of these challenges look eminently doable, if not done,
within the next decade or so. Just say that within a decade we've got some system, doesn't have to be human-level AGI, let's say mouse-level capacities, showing all of these features. Then the question: would that actually be enough for consciousness?
Many people will say no, but then the question is, well, if those systems are not conscious, what's missing? I think at that point, we have to take this very seriously. And by the way, we do need to think very, very seriously about the ethical question of whether it's actually okay for us to pursue this. I'm not necessarily recommending this research program; it's a very serious ethical question. Developing conscious AI may lead to harms to human beings, and may also lead to harms to these AI systems themselves. So I think there needs to be a lot of reflection about these ethical questions, and philosophers, as well as AI researchers and the broader community, are going to have to think about that very hard. So thanks.
Unfortunately, our AI voice enhancer couldn't bring clarity to much of this Q&A section, so I'll interject to summarize the questions. I guess I was just wondering about the unified agency, the thing you said was missing. And so one reason you might not think that's required is if you think of Ned Block's Aunt Bubbles machine, which had a unified agency, right? It was just modeled after this Aunt Bubbles, but there was no reason to think it was conscious.
Okay, so this Aunt Bubbles machine, also sometimes known as Blockhead, after my colleague Ned Block, who invented it, basically stores every possible conversation that one might have with, I guess, one's Aunt Bubbles. And, you know, once it gets to, like, step 40 of the conversation, it looks over the entire history of the conversation, looks up the right answer, and gives it. I mean, it's totally impossible to create a system like this; it would require a combinatorial
explosion of memory. But Ned used this to argue that a system could pass the Turing test and yet quite clearly would not be conscious or intelligent. Now, Jake is suggesting
that a system like this might nevertheless be unified, or at least unified in the very weak sense of being based on a certain individual. But I think if you actually look at its processing, it looks extremely disunified. To me, it's actually massively fragmented: it's got a separate mechanism for every single conversation. So I don't see the kind of mechanisms of integration there that philosophers have standardly required for unity of consciousness. It does bring up many interesting questions about what kind of unity is required for consciousness. There's no easy answer.
The speaker asks, what does it take to determine whether an entity is conscious? Well, there's never any guarantee with consciousness. Philosophers have argued that we can at least imagine beings who are behaviorally, maybe even physically, just like a human being.
But who are not conscious. Even when I'm interacting with you, you may give every sign of consciousness. But at least the philosophical question arises, are you conscious or are you a philosophical zombie? A system that lacks consciousness entirely. With other people, we're usually prepared to extend the benefit of the doubt. You know, other people are enough like us biologically, evolutionarily, that, you know, when they behave consciously and say they're conscious, then we've got pretty good
if not totally conclusive reasons to think they are. Now, once it comes to an AI system: well, their behavior may be like that of a human being, but they may still be unlike us in various ways. The internal processing of a large language model is extremely different from that in a brain. It's not just carbon versus silicon; the whole architecture is different, and the behavior is different.
So the reasons are going to be weaker. At this point, I think this is why you actually need to start looking inside the system, going beyond behavior to think about what the processes are. Let's look at our leading current theories of consciousness, what they require for consciousness, global workspace, world model, perhaps recurrence, perhaps some kind of unity. If we can actually find all of that in addition to the behavior, I then give that very serious
weight. Let's just say, look, it's still not conclusive proof that a system is conscious, but if you can do all that and then someone says it's not conscious, then at that point I think I can reasonably ask them: what do you take to be missing?
Thanks, David. I think I want to go back to maybe your second slide; you had several points there. My question was more about affective mechanisms and cognition. I just want to delve in and see your thoughts on affect and how it relates, because, you know, I think that's the missing piece of unified agency: that agent has to be interested in or moved by something. Like, if you want to take an example in a lab scenario, say someone
comes in every day wearing a blue shirt, and Sophia sees that; maybe she has a positive response to that, or the inverse of it, an embodiment of cognition, and maybe she's moved by a certain color or something random in the environment. You know, affect is what drives us, what causes interest in us. The AI isn't moved in that way: conscious or not conscious, it's the same to it. So what would be a driving factor? What would affective mechanisms look like in this setting?
Yeah, it's a good question. Affective experience is obviously extremely central to human consciousness. I've actually argued that it's not absolutely central to consciousness in general. There could be a conscious being
with little or no affective experience. We already know human beings with wider and narrower affective ranges. But, getting into thought experiments, I quite like the thought experiment (I think we were talking the other night about this) of the philosophical Vulcan, inspired by Spock from Star Trek, who was supposed to lack emotions. I mean, Spock was a terrible example, because he was half human and
often got emotional, and even Vulcans go through pon farr every few years. However, a philosophical Vulcan is a purer case: a conscious being that sees, hears, thinks, reasons, but never has an affective state. I think that's perfectly coherent. Humans aren't like that, but I think that's coherent, and I've argued a being like that would still have moral status. So affect, suffering, is not the be-all and end-all for moral status. That said, affect is very
crucial for us. What would drive a philosophical Vulcan to do what they do? Not their affective states, not feelings of happiness and frustration and so on. Rather, I think it would be more like the kind of creature described by Immanuel Kant, who has these kinds of colder, you know, goals and desires. It could still want to advance science and protect its family and so on, in the absence of affect.
So I think it would be possible, even if we didn't have good mechanisms for affective experience in AI; I think we could still quite possibly design a conscious AI. That said, one very natural route to conscious AI is to try and build in affective mechanisms.
Carla,
I wonder what you think about this thing that I find interesting, this asymmetry. You were talking about mouse-level consciousness, and we're all like: oh my god, if that happens, you need to hire lawyers and do these things. I mean, what do you think about this asymmetry, given that these creatures most of us think are conscious should fall within the moral protections
Yeah, I mean, I think my own view is that any conscious being at least deserves moral consideration. So I think absolutely mice deserve moral consideration. Now, exactly how much moral consideration, that's itself a huge issue in its own right. Probably not as much moral consideration as a human being, but probably some.
I don't know; even a scientist running a lab where they put mice through all kinds of things, I think they at least give the mice some moral consideration. They probably try not to make the mice suffer totally gratuitously. Even that's a sign of some moral consideration. That said, they may be willing to put them through all kinds of suffering when it's scientifically useful, which means they're not giving them that much moral consideration. And I'm very prepared to believe we should give mice and fish and many animals much more moral consideration than we do. Exactly
what the right level is, I don't really know. But yeah, I think much the same goes for AI. I don't think it's out of the question that current AI systems could have something like the conscious experience of, you know, a worm, or maybe even a fish, or something like this, and thereby already deserve some
moral consideration. That said, with AIs it's not like it is with, say, mice. Well, I guess in Flowers for Algernon the mouse gets to be very smart very soon and suddenly reaches human-level intelligence; that doesn't happen with real mice. But with AIs, you know, it's going to be one day fish, the next day mice, the next day primates, the next day humans. And, you know, obviously when the issue is really going to hit home is when we have AI systems as sophisticated as we are.
May I jump in and follow up on that?
That's such an interesting question because you could also envision a hybrid AI system such as an animat, if you will, with a biological component as well as a non-biological component or a neuromorphic component and a non-neuromorphic component. Maybe you get something like the consciousness of a mouse, but super intelligent.
I mean, I'm wondering if we should be more careful about assuming a correlation between level of consciousness and level of intelligence and what this issue does in the moral calculus of concern.
Yeah, it's a great question. Do you want to throw that one out? I know you wanted to throw it up into the audience at some point. I'm happy to take on this one. I think consciousness and intelligence are, to some degree, dissociable. I think that's especially so for, say, sensory consciousness, affective consciousness, and so on. But cognitive consciousness, I am inclined to think, has got some strong correlation with intelligence. Relatively
unintelligent systems, you know, a worm and so on, probably don't have much in the way of cognitive consciousness. Humans have far more developed cognitive consciousness. And even in sensory consciousness and affective consciousness, we have rich sensory and affective states; that's, I think, due in significant part to their interaction with cognitive consciousness. And I'm inclined to think... people say, you know, Bentham said what matters for animals is not "Can they talk?" or "Can they reason?" but "Can they suffer?"
I actually think maybe Bentham was a little bit too fast. Your reasoning and cognition are actually very important for moral status, and that, I think, does at least correlate with intelligence. But affect, on the other hand: yeah, maybe a mouse can be suffering hugely, and that suffering ought to get weight in our considerations, to some degree independently of intelligence. I think Curt, from Theories of Everything, who is the
co-MC, is going to jump in now with a question. Is that right? Sure. Is it all right if Valerie asks one question? Because I know that she has one that's been burning. A burning question. Gosh. A burning question: it's about a robot being hypnotized. Because when you hypnotize a person, you're going into the subconscious, getting material to bring it forward, to see what's there. For the designers: what's your view?
That is a great question. I have no idea about the answer, but maybe someone here does. Anyone? Has anyone hypnotized an AI system? There are people who have done simulations of the conscious and the unconscious. Do you know of any AI system simulating the Freudian unconscious, Claudia? Someone should be doing this, for sure. This reminds me, though, of
issues involving testing machine consciousness, and it reminds me of, for example, Ed Turner's view, and I guess it was sort of my view too, on writing a test for machine consciousness that was probing to see if there was the felt quality of experience.
And I actually think that cases of hypnosis, you know, if you could find that kind of phenomenon at the level of machines, could very well be an interesting indication that something was going on. But it leads us to a more general issue that we wanted to raise with the audience, which is: what methodological requirements are appropriate
for testing the machine and deciding whether a machine is conscious or not. And maybe I'll turn it over to Ed Turner for the first answer, and then back there, then Goertzel, for the second.
Razor blades are like diving boards. The longer the board, the more the wobble, the more the wobble, the more nicks, cuts, scrapes. A bad shave isn't a blade problem, it's an extension problem. Henson is a family-owned aerospace parts manufacturer that's made parts for the International Space Station and the Mars Rover.
Now they're bringing that precision engineering to your shaving experience. By using aerospace-grade CNC machines, Henson makes razors that extend less than the thickness of a human hair. The razor also has built-in channels that evacuate hair and cream, which makes clogging virtually impossible. Henson Shaving wants to produce the best razors, not the best razor business, so that means no plastics, no subscriptions, no proprietary blades, and no planned obsolescence.
It's also extremely affordable. The Henson razor works with the standard dual edge blades that give you that old school shave with the benefits of this new school tech. It's time to say no to subscriptions and yes to a razor that'll last you a lifetime. Visit hensonshaving.com slash everything.
If you use that code, you'll get two years' worth of blades for free; just make sure to add them to the cart. Plus 100 free blades when you head to H-E-N-S-O-N-S-H-A-V-I-N-G dot com slash everything and use the code everything. Let me just quickly say the kind of meta idea behind the specific test that Susan and I published a few years ago
is as follows. Since we can't directly detect felt experience, subjective experience, you might ask, what do entities that have self-consciousness learn from this experience? Do they get any information from that experience that isn't otherwise available to them? And if so, looking for that
testing to see if they have that information would be an indirect way, a proxy, of testing for the experience itself. And I think we use this with people a great deal. And the example that Susan and I turned to was that almost everyone understands very easily ideas like reincarnation, ghosts, out-of-body experiences, life after death,
because from their experience they perceive themselves as existing as an entity that has experiences, as different from a physical object. So if you say to someone, you know, after you die you will be reincarnated in another body, that makes sense to people, just like that. If you say to them, after you die your soul will be reincarnated,
that sounds inappropriate, and you have to explain a lot about what you could possibly mean by that. And for a variety of human experiences, you know, the felt experience of things like a broken heart in a romantic relationship, culture shock, an anxiety attack, or synesthesia, which is a little more exotic:
If you're talking to someone, if you've had one of those experiences and you speak to someone who has not had them or who has also had them, you can tell the difference very quickly. They get it if they've already had the experience and you can go ahead and talk about what it's like to have an anxiety attack or whatever. If not, you have a lot of explaining to do to get them to understand what you're talking about. And so the sort of structure of the type of tests that Susan and I
proposed was that you isolate the machine from any information about what people say about their felt experience, and then try to get it to understand some of these concepts. Thank you, Ed. And now, Ben Goertzel, I believe you had a comment as well. Yeah, I've thought about this topic a fair bit, and the
partial solution I've come up with is this: once we can do brain-to-brain or brain-to-machine interfacing. I mean, very broadly speaking, people talk about first-person experience, the subjective feel of being yourself;
second-person experience, which is more like a Buberian, I-and-Thou experience, directly perceiving the mind of another person; and third-person experience, which is sort of objective-ish, like shared experience in the physical realm. What's interesting, when we think about the so-called hard problem of consciousness here, is the contrast of first-person experience,
which is your subjectively felt qualia, with science, which in essence is about how people with minds commonly agree that a certain item of data is in the shared perception of whatever is in that community. Once we can sort of wire or Wi-Fi our brains together, let's say that I can wire my brain with the brain of this gentleman right here, it would be an amazing experience, right? If
you can increase the bandwidth of that wire, then we would feel like we were conjoined twins. It seems like this sort of technology, which is probably not super, super far off, gives this a different dimension of view. It's somewhat bypassing the hard problem of consciousness, although not actually solving it, because it brings it into the domain of shared, second-person experience.
I feel this has existed in a less advanced form in the history of Buddhist and various spiritual traditions.
Well, people follow common meditation protocols and psychedelic protocols, and they have the sense that they're communing directly with the minds of other people there. Ben, that is fascinating. And you know, I've been telling my students about the craniopagus twin case, I don't know if you know about that, conjoined twins in Canada who have a thalamic bridge. Wow.
It's a novel anatomical structure that we don't have in nature, as far as I know, previous to this, or at least not documented. And of course, everybody wants to study them. The parents are very protective, however, but it is well documented that they tease each other: even though they're conjoined, when one eats peanut butter, she'll do it to drive her twin crazy.
I wanted to bring that over to Dave Chalmers and see what your reaction is to Ben's idea.
Oh, you know, I love the idea of mind merging as a test for consciousness. I'm not totally convinced, because of course you could mind merge with a zombie and, from the first-person perspective, it would still feel great. You could probably mind merge with GPT-4 and it'd be pretty intense. Ben asks: if you mind merged with a philosophical zombie, could it feel the same as mind merging with a fully conscious being? The kind of mind merging I'm having in mind, it's like, there are still two minds here, right? I mean, I'm still me experiencing you. So, you know, I'm a conscious being already, and this is having massive effects on my consciousness. Psychedelic drugs could have massive effects on my consciousness without themselves being conscious.
So you have the idea of merging to become one common mind. I would still worry that a conscious being and an unconscious being could merge into one unified, freaky, conscious mind. Yeah, but now we need the criteria for which distinctive feelings are actually the feelings that track the other being. The other version I like is gradually turning yourself into that being. You know, I mean, the classic version is that you replace your neurons by silicon chips.
So you gradually transform yourself into a transformer running a simulation of yourself.
Okay, so this question is to everyone.
If we could build these conscious machines, the question still remains: should we? Yes. Okay, someone other than me. If we can, we will. But should we? From an ethical standpoint, should we? We have so many people suffering on this planet already.
Stephanie asks: if we build conscious AI, how do we manage their empathy? Stephanie's point is an excellent one. So, you know, if we build conscious AI, how do we know it won't be a sociopath? How do we know that it will be empathetic and treat us well? Right. I mean, we would obviously have to test the impact of consciousness on different AI systems and not make any assumptions
that just because, in the context of one AI architecture, the machine is generous and kind, in the context of other machine architectures it will also be kind. We'll have to bear in mind that how machines behave may come down to their architecture, especially when they become superintelligent. I think somebody else had their hand up. Let's pass it down to our new friend from the University of Kentucky.
Yeah, so I have a very quick, novel argument as to why we should build conscious and superintelligent AI. If we build conscious, superintelligent AI, then it might well be omniscient. If it is as superintelligent as we might imagine it being in a singularity, then it might well be omniscient.
Now, if it's omniscient and all-knowing, it follows that it's also omnipotent, right? Because if you know everything, then you know how to do everything, right? All knowledge, practical and theoretical. Now, if it's omniscient and all-powerful, or omnipotent,
the only thing that's left is omnibenevolence, right? Because if we are Kantian about morality, then the more rational we are, the more moral we are, right? And if we're utilitarian about morality, then we're better at figuring out how to maximize utility if we are more rational, for we have better calculative ability. So whether you're a Kantian or a utilitarian,
it still follows that the more rational you are, the more capacity you have to be moral, right? We will be designing what we've always traditionally thought of as divinity. And if it's omnibenevolent, which follows from omniscience, then why not, right? It will bring about the right moral state, or I guess the right moral conditions, for all of us to thrive. So that's my quick argument.
Thank you very much. That was really interesting. So he was alluding to a lot of issues in theology about the definition of God; very suggestive. So, you know, one thing I want to point out, and we just have to move on really quickly, though, is that just because a machine is superintelligent does not entail that it's all-knowing, all-powerful, and all-benevolent, right? Superintelligence is defined as simply outsmarting
humans, any single human, in every respect: scientific reasoning, social skills, and more. But it does not at all entail that that entity has the classic traits of the God of the Judeo-Christian tradition. Okay, so that said, let me turn to our next question. And again, this is for the audience. So, going back to Dave's wonderful paper, which really got things going:
one thing I was very curious about: as I watched the whole Blake Lemoine mess unfold, I sort of wondered what was going on, and I started to go down the rabbit hole, listening to Lemoine's podcast appearances, where he's been invited on a lot of shows, and listening to the details of his interaction with Google behind the scenes. So one thing that he reports is that they have a reading group over
at Google, some very smart engineers who were designing LaMDA, and they were studying philosophy. So over there, they were reading Daniel Dennett on consciousness, David Chalmers, and so on, studying it. I thought that was cool. I thought that was really cool. And the interesting thing, too, was that in the media, Lemoine was characterized as being somewhat of a religious fanatic.
But if you listen to the reasons he provided, he was a computational functionalist who had been reading a lot of Dan Dennett's argumentation, straight from his writings on consciousness and related texts. So what I have as a question for everybody is: given Google's reaction to the whole thing, which was to sort of silence the debate and laugh at Lemoine, I'm wondering,
why would Google, and maybe other big tech companies, not want to discuss the issue of large language model consciousness? I'll just put that to the audience to see if there are any ideas. Thanks. I, if I can, would like to go back to the question you raised, the "should we" idea. And I come at this as a retired surgeon who has been doing bioethics for 30 years.
I'm more involved recently in changes in genetics, and the technology that's available for that, than I am with the computer field. But they're not completely separate. And I'm reminded of the period of eugenics in the world and in our country, when people arrived
at this belief that they could improve the species and make humans better with technology. And in retrospect, they were horribly misguided, doing things like sterilization and segregation. We need to remember these well-intentioned, misguided people and learn some humility. Thank you. Now back to Curt. Sure. So I... Hello, hello.
I have two questions I'll sneak in. My question is: what questions are we not asking about AI? For instance, we have plenty of talk here about ethics and consciousness. What else is there that we're not focusing on that's just as exigent, or more interesting? So that's question number one. And question number two is: are we overly concerned with type one errors in these tests at the expense of type two? That is, are we making the tests so stringent that we
Unfortunately, the answers to my last question became garbled and so we're not able to hear the two audience members' responses, so feel free to add your own answers in the comments section below.
The links to that are in the description, as well as all of the other talks from MindFest. The conference took place at Florida Atlantic University's Center for the Future Mind, and focused on the AI and consciousness connection.
The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people.
You should also know that there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, disagree respectfully about theories, and build as a community our own toes. Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well.
Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in Theories of Everything and you'll find it. Often I gain from re-watching lectures and podcasts, and I read in the comments that, hey, TOE listeners also gain from replaying. So how about, instead, re-listening on those platforms? iTunes, Spotify, Google Podcasts, whichever podcast catcher you use.
},
{
"end_time": 393.695,
"index": 16,
"start_time": 365.026,
"text": " One high point of interest was this big brouhaha last June, where Blake Lemoine, an engineer at Google, said he detected sentience in one of Google's AI systems, Lambda 2. Maybe it had a soul, maybe it had consciousness, and this led to a whole lot of skepticism, a whole lot of discussion. Google,"
},
{
"end_time": 420.998,
"index": 17,
"start_time": 394.002,
"text": " Themselves said our team including ethicists and technologists has reviewed Blake's concerns And we've informed him that the evidence doesn't support his claims There's no evidence that lambda was sentient lots of evidence against it. This was already very interesting. Okay evidence good What is now that they didn't actually lay out the evidence? I'm just curious what it what was the evidence?"
},
{
"end_time": 445.503,
"index": 18,
"start_time": 421.937,
"text": " In favor of his claims and what was the evidence against it and really this talk is in a way was in a way just a way to try and make sense of them of Those questions evidence for evidence against asking questions like our current language models conscious Could future language models"
},
{
"end_time": 470.998,
"index": 19,
"start_time": 446.442,
"text": " Be conscious. Also, could future extensions of language models be conscious, as when we combine language models, say, with perceptual mechanisms or with action mechanisms in a body or with database lookup and so on, and what challenges need to be overcome on the path to conscious machine learning systems at each point, trying to examine the reasons"
},
{
"end_time": 499.48,
"index": 20,
"start_time": 471.92,
"text": " So here I'll just briefly go over clarifying some of the issues, looking briefly some of the reasons in favor, looking at some of the reasons against, and draw conclusions about where we are now and about possible roadmaps between where we are now and machine consciousness. So as I understand the term consciousness, we've already had a lot of discussion of this at the conference, especially"
},
{
"end_time": 510.06,
"index": 21,
"start_time": 499.889,
"text": " Especially yesterday morning, so the ground has been prepared These words like consciousness and sentience get used in many different ways at least as I understand them"
},
{
"end_time": 536.203,
"index": 22,
"start_time": 510.555,
"text": " I basically mean subjective experience. A being is conscious if there's something it's like to be that being, that is, if it has subjective experience. Yesterday, I think it was Anand talking about Nagels, what is it like to be a bat? The question now is, what is it like, is there anything it's like to be a large language model or future extensions of large language models?"
},
{
"end_time": 563.609,
"index": 23,
"start_time": 536.476,
"text": " So, consciousness includes many different kinds, many different varieties. You can divide it into types in various ways. I find it useful to divide it into a few categories. Sensory experience, like seeing and hearing. Affective experience, the experience of value, things, you know, feeling good or bad, like pain or pleasure. Feeling emotions, like happiness and sadness. Cognitive experiences,"
},
{
"end_time": 590.623,
"index": 24,
"start_time": 563.968,
"text": " like the experience of thinking, agentive experience, experience of acting and deciding, like an agent. And all of these actually can be combined with self-consciousness, awareness of oneself. Although I take self-consciousness just to be one kind of consciousness. Awareness, consciousness of the world may be present even in simple systems that don't have self-consciousness. And some of these"
},
{
"end_time": 619.036,
"index": 25,
"start_time": 591.101,
"text": " components may be more apt to be had by language models than others. I think it's very important to distinguish consciousness from intelligence. Intelligence I understand roughly in terms of behavior and reasoning that guides behavior. Intelligence is roughly, you know, being able to do means and reasoning in order to be able to achieve multiple ends in many different environments. Ultimately a matter of behavior."
},
{
"end_time": 648.899,
"index": 26,
"start_time": 619.855,
"text": " Consciousness comes apart from that. I think, you know, consciousness may be present in systems which are fairly low on the on the intelligence scale. Certainly doesn't require anything like human level intelligence. Good chance that worms are conscious or fish are conscious. Well, sure. So the issue of consciousness isn't the same as the issue of human level artificial general intelligence. Consciousness is subjective."
},
{
"end_time": 674.548,
"index": 27,
"start_time": 649.206,
"text": " You might ask why consciousness matters. I mean, it would be nice to say, well, one reason why consciousness matters is consciousness is going to give you all these amazing capacities. Conscious systems will be able to do things that other systems can't do. That may be true, but actually right now we understand the function of consciousness sufficiently badly. There's nothing I can promise you that conscious systems can do."
},
{
"end_time": 692.346,
"index": 28,
"start_time": 674.838,
"text": " That unconscious systems can't do but one reason one fairly widely acknowledged reason why it matters is consciousness matters for morality and Moral status if I say an animal like say a fish is conscious if it can suffer that means in principle it matters how we treat the fish if it's not conscious"
},
{
"end_time": 718.2,
"index": 29,
"start_time": 692.756,
"text": " If it can't suffer and so on, then its consciousness doesn't matter. And that, again, we were talking about yesterday. So if an AI system is conscious, suddenly it enters our moral calculations. We have to think about, you know, boy, if the training we're inflicting on a machine learning system actually inflicts suffering, a possibility some people have taken seriously, we need to worry about whether we're actually creating a moral catastrophe."
},
{
"end_time": 746.015,
"index": 30,
"start_time": 718.2,
"text": " by training these systems. At the very least, we ought to be thinking about methods of dealing with these AI systems that minimize suffering and other negative experiences. Furthermore, even if conscious AI doesn't suffice for human level AI, maybe it'll just be, say, fish level or mouse level AI, it'll be one very important step on the road to human level AI, one that brings, you know,"
},
{
"end_time": 773.404,
"index": 31,
"start_time": 746.749,
"text": " One that would be if we could be confident of consciousness in an AI system, that would be a very significant step. Okay, so reasons for and again, so just go over the summarize a few reasons in favor of consciousness in current language models. And I put this in the form of asking for a certain kind of request for reasons."
},
{
"end_time": 802.295,
"index": 32,
"start_time": 773.677,
"text": " I ask a proponent of language models being conscious to articulate a feature X such that language models have X and also such that if a system has X it's probably conscious and ideally give good reasons for both of those claims. I'm not sure there is such an X but you know a few have been at least articulated. For Blake Lemoine it seemed actually what moved him the most"
},
{
"end_time": 819.701,
"index": 33,
"start_time": 802.637,
"text": " was self-report, the fact that Lambda 2 said, yes, I am a sentient system. What's your sentience and consciousness like? It would explain this. I'm sometimes happy and sad. Interestingly, in humans, verbal report is actually"
},
{
"end_time": 846.22,
"index": 34,
"start_time": 820.725,
"text": " Typically our best guide to consciousness, at least in the case of adults. Claudia was talking about, you know, the cases like infants and animals where we lack verbal reports. So you might think we're in a more difficult position. Well, actually, in the case of language models, we have verbal reports. They would think, great. They say they're conscious. We'll use that as evidence. Unfortunately, as Susan and Ed and others have pointed out,"
},
{
"end_time": 870.981,
"index": 35,
"start_time": 846.63,
"text": " You know this evidence is not terribly strong in a context where these language models have been trained on a giant corpus of Text from human beings who are of course conscious beings and who talk about being conscious So it's fairly plausible that a language model has just learned to to repeat those claims at least so in this special case"
},
{
"end_time": 895.367,
"index": 36,
"start_time": 871.459,
"text": " You know Susan and Ed's artificial consciousness test is super interesting which basically look at your reactions to thought experiments about consciousness But in this special case where they've been trained on so much Text already. I think it carries less weight. Maybe I'll skip over Seems consciousness which is relevant this conversation, you know a lot of people have been very very impressed by the conversational ability of these recent"
},
{
"end_time": 924.872,
"index": 37,
"start_time": 895.862,
"text": " These recent systems I guess chat GPT was fine-tuned for conversation unlike Unlike the basic GPT-3 model and now we have GPT-4 Which also appears to have been fine-tuned for conversation and you know the conversational ability is one of the classic criteria for for thought in AI systems articulated by Turing back in his 1950 paper on the imitation game. He basically thought if a machine behaved in"
},
{
"end_time": 953.046,
"index": 38,
"start_time": 925.213,
"text": " in a way sufficiently indistinguishable from a human, then we might as well say it can think or that it's intelligent. So these language models, I think they've not yet passed the Turing test. Anyone who wants to probe sufficiently well can find glitches and mistakes and idiosyncrasies in this system. That said, through interacting, I got access to GPT-4, through interacting with that just the last couple of days,"
},
{
"end_time": 979.633,
"index": 39,
"start_time": 953.336,
"text": " Kind of feels a lot like, I was saying for like a GPT-3, it felt like talking to a smart eight-year-old. I think now GPT-4, it's at least a teenager. Maybe it's even, it's approaching sophisticated, you know, adult in a lot of its, a lot of its conversations. Yes, it makes mistakes. You know, one Turing test you might do is ask your questions like, you know, what is the fifth word of this sentence? And apparently it always gets this kind of thing wrong. It will say something like fifth."
},
{
"end_time": 1000.759,
"index": 40,
"start_time": 980.196,
"text": " No, actually the fifth word of the sentence was word. What was the fifth letter of the third word of this sentence? It'll get that wrong. Okay, a lot of people will get that wrong too, so that's not a guarantee that it's failing the Turing test. We're getting close. I mean, that said, I think conversational ability here is relevant because"
},
{
"end_time": 1031.015,
"index": 41,
"start_time": 1001.305,
"text": " Not so much in his own right, but because it's a sign of general intelligence. If you're able to talk about a whole lot of different domains and play chess and code and talk about social situations and scientific situations, that suggests general intelligence. So a lot of people have thought there are at least correlations between domain general abilities and consciousness. Abilities you can only use, information you can only use for certain special purposes."
},
{
"end_time": 1056.288,
"index": 42,
"start_time": 1031.51,
"text": " Are not necessarily conscious information available for all kinds of different activities in different domains are often held to go along with consciousness. So that at least I think gives us some basic reasons. I take probably this one as the most serious reason for taking seriously the possibility of consciousness in these systems that said"
},
{
"end_time": 1086.067,
"index": 43,
"start_time": 1056.937,
"text": " I don't think, just looking at the positive evidence, I don't think any of this provides remotely conclusive evidence that language models are conscious, but I do think the impressive general abilities give at least some limited initial support for taking the hypothesis seriously, at least taking it seriously enough to now to look at the reasons against, to see, you know, what are these? Everyone says, okay, look, there's all kinds of evidence, these systems are not conscious. Well, okay, what are those reasons? I think they're worth"
},
{
"end_time": 1116.578,
"index": 44,
"start_time": 1086.613,
"text": " examining. So the flip side of this is to look at the reasons why language models are not conscious and the challenge here for opponents is articulate a feature x so that language models lack x and if a system lacks x it's probably not sentient and then again try and give good reasons for one and two and here we have six different reasons that I consider"
},
{
"end_time": 1144.787,
"index": 45,
"start_time": 1117.005,
"text": " in the paper. The first one, which I just considered very briefly, is the idea that consciousness requires biology, carbon-based biology. Therefore, a silicon system is not biological and lacks consciousness. That would rule out pretty much all AI consciousness, if correct, at least all silicon-based AI"
},
{
"end_time": 1171.527,
"index": 46,
"start_time": 1144.94,
"text": " Consciousness this one is a really familiar issue and philosophy goes back to issues of you know soul and the the Chinese room and All kinds of yeah, it's a very very well-trodden debates I I'm really here more interested in issues more specific to to a language model. So I'll pass over this one quickly a More maybe"
},
{
"end_time": 1199.104,
"index": 47,
"start_time": 1172.142,
"text": " Closer to the bone for for language models specifically is the issue of having senses and embodiment a standard language model has nothing like a like a human sense, you know vision and hearing and So on no sensory processing so they can't sense suggests. They have no sensory consciousness Furthermore, they lack a body If they lack a body it looks like they can't act if so"
},
{
"end_time": 1229.155,
"index": 48,
"start_time": 1199.701,
"text": " Maybe no agentive consciousness. Some people have gone on to argue that because of their lack of senses, they may not have any kind of genuine meaning or cognition at all. We need senses for grounding the meaning of our thoughts. There's a huge amount to say about this. In fact, I recently had to give a talk at the American Philosophical Association and just talk completely about this issue. But one way just to briefly"
},
{
"end_time": 1257.995,
"index": 49,
"start_time": 1229.633,
"text": " cut this issue off is to look at all of the developing extensions of language models that actually have some forms of sensory processing. For example, GPT-4 is already designed to process images. It's what's called a vision language model. And I gather, although this is not yet fully public, that it can process sound files and so on. So it's a multimodal model. This is DeepMind's Flamingo, which is another vision language model."
},
{
"end_time": 1282.91,
"index": 50,
"start_time": 1258.746,
"text": " You might say, what about the body? But people are also combining these language models with bodies now. Here's Google Seikan, where it actually, a language model controls a physical robot. Here is a, this is one of DeepMind's model, MIA, which works, controls a virtual body in a virtual world. And that's become a very"
},
{
"end_time": 1311.084,
"index": 51,
"start_time": 1283.353,
"text": " A very big thing to that connects to the issues I'll be talking about later this afternoon about virtual reality and virtual worlds, but we're already moving to a point where I think it's going to be quite standard quite soon for to have extended language models with sensors and embodiment, which will tend to overcome the objection from lack of sensors and embodiment. Another issue is the issue of world models. There's this famous criticism by"
},
{
"end_time": 1338.882,
"index": 52,
"start_time": 1311.544,
"text": " Timnit Gebru, Emily Bender and others that language models are sarcastic parrots. They just minimize text prediction error. They don't have genuine understanding, meaning, world models, self models. There's a lot to this. Again, I take world models to be the crucial part of this question because world models are plausibly something which is required for consciousness."
},
{
"end_time": 1368.643,
"index": 53,
"start_time": 1339.633,
"text": " Hi, I'm here to pick up my son Milo. There's no Milo here. Who picked up my son from school? Streaming only on Peacock. I'm gonna need the name of everyone that could have a connection. You don't understand. It was just the five of us. So this was all planned? What are you gonna do? I will do whatever it takes to get my son back. I honestly didn't see this coming. These nice people killing each other. All Her Fault, a new series streaming now only on Peacock."
},
{
"end_time": 1399.411,
"index": 54,
"start_time": 1371.664,
"text": " There's actually a lot of interesting work in interpretability, recently of actually trying to detect world models in systems. Here's someone trying to detect a world model in GBT-3 playing Othello, and they actually find some interesting models of the board. A slightly more technical issue is the question that these language models are feed-forward systems that lack memory-like internal state of the kinds you find in recurrent."
},
{
"end_time": 1427.398,
"index": 55,
"start_time": 1399.889,
"text": " Networks many theories of consciousness say that recurrent processing and a certain form of short-term memory is required for consciousness Here's a standard LSTM a standard recurrent network. Whereas transformers are largely largely feed forward they've got some quasi recurrence from this recirculation of inputs and outputs you know that said there's a I take it there's a"
},
{
"end_time": 1457.346,
"index": 56,
"start_time": 1427.961,
"text": " We don't know the architecture of GPT-4. There are rumors that it involves more recurrence than GPT-3. So this also looks like a temporary limitation. Fifth is the question of a global workspace. Perhaps the leading theory, the leading scientific theory of consciousness, Claudia talked about it yesterday, is that consciousness involves this global workspace for connecting many modules in the brain. It's not obvious that language models have these."
},
{
"end_time": 1487.858,
"index": 57,
"start_time": 1457.858,
"text": " On the other hand,"
},
{
"end_time": 1515.572,
"index": 58,
"start_time": 1488.285,
"text": " If you ask me, the feature that these many current language models lack that seems most crucial is some kind of unified agency. It looks like these language models can take on many different personas like actors or chameleons. Yeah, you can get lambda to say it's sentient. You can just as easily get it to say it's not sentient. You can get it to simulate a philosopher or an engineer or a politician."
},
{
"end_time": 1542.585,
"index": 59,
"start_time": 1516.493,
"text": " They seem to lack stable goals and beliefs of their own, which suggests a certain lack of a certain unity to which many people think consciousness requires more unity. But again, that said, you know, there's a lot of things to say about this, but there is a whole emerging literature on like agent modeling or person modeling where you develop these systems. At the very least, it's not very different. It's very easy to fine tune these systems."
},
{
"end_time": 1561.544,
"index": 60,
"start_time": 1543.029,
"text": " to act more like a single individual. But there are projects of trying to train some of these systems from the ground up to model a certain individual. And that's certainly coming. Perhaps some reason to think those systems will be more unified. Okay, so then look at those six reasons against"
},
{
"end_time": 1587.21,
"index": 61,
"start_time": 1562.125,
"text": " That's okay. Some of them are actually reasonably strong and plausible requirements for consciousness. It's reasonably plausible that current language models lack them. I think especially the last three I view as quite strong. That said, all of those reasons look quite temporary. We're already developing models of global workspace. Recurrent language models exist and are going to be developed further. More unified models."
},
{
"end_time": 1615.009,
"index": 62,
"start_time": 1587.671,
"text": " There's a clear research program there, so it's interesting to me that the strongest reasons all look fairly temporary. So, just to sum up my analysis, are current language models conscious? Well, no conclusive reasons against this, despite what Google says, but still there are some strong reasons, reasonably strong reasons, to deny that they're conscious corresponding to those requirements,"
},
{
"end_time": 1630.179,
"index": 63,
"start_time": 1615.503,
"text": " I think it would not be unreasonable to have fairly low confidence that current language models are conscious, but looking ahead 10 years, it was 2032 when I gave the talk, I guess now 2033, will future."
},
{
"end_time": 1658.268,
"index": 64,
"start_time": 1630.572,
"text": " language models and their extensions be conscious. I think there's good reason to think they may well have overcome the most significant and obstacle up and obvious obstacles to conscious. So I think it would be reasonable at least to have somewhat higher credence when asked to, you know, you shouldn't be too serious with numbers on these things. But when asked to put a number on this, I'd say I'm at least I think it's at least say a 20% chance that we'll have conscious AI by 2032. And if so, that'll be quite"
},
{
"end_time": 1687.637,
"index": 65,
"start_time": 1659.053,
"text": " Significant so conclusion questions about AI consciousness are not going away within 10 years even if we don't have human level AGI we'll have systems that are serious candidates for conscious for consciousness and meeting the challenges to consciousness in language models could actually yield a potential roadmap to conscious AI and actually here I laid out in the longer version of the paper a something of a roadmap I mean there are some"
},
{
"end_time": 1715.981,
"index": 66,
"start_time": 1688.148,
"text": " Philosophical challenges to be overcome, better evidence, better theory, better interpretability. The ethics is all important, but also some technical challenges. Rich models in virtual worlds, with robust world models, with genuine memory and recurrence, global workspace, unified person models, and so on. But you just said we had, we actually overcame those challenges. All of these challenges look eminently doable, if not done."
},
{
"end_time": 1733.609,
"index": 67,
"start_time": 1716.681,
"text": " Within the next decade or so just say by within a decade. We've got some system doesn't have to be human level a GI. Let's say mouse level capacities showing all of these features then question would that actually be enough for consciousness?"
},
{
"end_time": 1759.531,
"index": 68,
"start_time": 1734.036,
"text": " Many people will say no, but then the question is, well, if those systems are not conscious, what's missing? I think at that point, we have to take this very seriously. And by the way, we do need to think very, very seriously about the ethical question about whether it's actually okay for us to pursue. I'm not necessarily recommending this research program. It's a very serious ethical question. If in developing conscious AI,"
},
{
"end_time": 1777.944,
"index": 69,
"start_time": 1759.906,
"text": " May lead to harm to human beings, may also lead to harm to these AI systems themselves. So I think there needs to be a lot of reflection about these ethical questions and philosophers as well as AI researchers in the broader community are going to have to think about that very hard. So thanks."
},
{
"end_time": 1808.916,
"index": 70,
"start_time": 1779.804,
"text": " Unfortunately, our AI voice enhancer couldn't bring clarity to much of this Q&A section, so I'll interject to summarize the questions. I guess I was just wondering about the Unified Agency. That's the worst thing that was missing. And so one reason you might not think of that is if you think of like Netlock's AMP Bubbles machine, which had a Unified Agency, right? So it was just modeled after this AMP Bubbles, but there was no reason to think I was conscious."
},
{
"end_time": 1838.746,
"index": 71,
"start_time": 1811.152,
"text": " Okay, so this aren't bubbles machine, also sometimes known as Blockhead, after my colleague, Ned Block, who invented it, basically stores every possible conversation that one might have with, I guess, one's aren't bubbles. And just, you know, once it gets to like step 40 of the conversation, it looks over the entire history of the conversation, looks up the right answer, and, and gives it I mean, totally impossible to create a system like this, it would require a combinatorial"
},
{
"end_time": 1854.292,
"index": 72,
"start_time": 1839.377,
"text": " Explosion of memory but Ned used this to argue that systems a system could pass the Turing test the system could pass the Turing test but it quite clearly would not be conscious or intelligent now Jake is suggesting that"
},
{
"end_time": 1884.855,
"index": 73,
"start_time": 1854.957,
"text": " That a system like this might nevertheless be unified or unified in the very weak sense of being based on a certain individual. But I think if you actually look at its processing, it looks extremely disunified. To me, it's actually massively fragmented. It's got a separate mechanism for every single conversation. So I don't see the kind of mechanisms of integration there that philosophers have standardly required for unity of consciousness. It does bring up many interesting questions about what kind of unity is required for consciousness. There's no easy answer."
},
{
"end_time": 1912.773,
"index": 74,
"start_time": 1884.855,
"text": " The speaker asks, what does it take to determine whether an entity is conscious? Well, there's never any guarantee with consciousness. Philosophers have argued that we can at least imagine beings who are behaviorally, maybe even physically, just like a human being."
},
{
"end_time": 1941.032,
"index": 75,
"start_time": 1913.063,
"text": " But who are not conscious. Even when I'm interacting with you, you may give every sign of consciousness. But at least the philosophical question arises, are you conscious or are you a philosophical zombie? A system that lacks consciousness entirely. With other people, we're usually prepared to extend the benefit of the doubt. You know, other people are enough like us biologically, evolutionarily, that, you know, when they behave consciously and say they're conscious, then we've got pretty good"
},
{
"end_time": 1965.964,
"index": 76,
"start_time": 1941.527,
"text": " if not totally conclusive reasons to think they are. Now, once it comes to an AI system, well, they're unlike, their behavior may be like that of a human being, but they may still be unlike us in various ways. The internal processing of a large language model is extremely different from that in a brain. It's not just carbon versus silicon. It's like the whole architecture is different and the behavior is different."
},
{
"end_time": 1992.944,
"index": 77,
"start_time": 1966.527,
"text": " So the reasons are going to be weaker. At this point, I think this is why you actually need to start looking inside the system, going beyond behavior to think about what the processes are. Let's look at our leading current theories of consciousness, what they require for consciousness, global workspace, world model, perhaps recurrence, perhaps some kind of unity. If we can actually find all of that in addition to the behavior, I then give that very serious"
},
{
"end_time": 2007.125,
"index": 78,
"start_time": 1993.558,
"text": " Let's just say, look, it's still not conclusive proof that a system is conscious, but if you can do all that and then someone says it's not conscious, then at that point I think I can reasonably ask them, what do you take to be missing?"
},
{
"end_time": 2037.227,
"index": 79,
"start_time": 2010.06,
"text": " Thanks David. I think I want to go back to maybe your second slide. You had several points there. My question was more about affective mechanisms and cognition. So I just want to delve in and see your thoughts on affect and how it relates because, you know, of course, I think that's the missing piece of immunified agency because that agent has to be interested or involved by something. Like if you want to take an example in a lab scenario, say I've been"
},
{
"end_time": 2066.374,
"index": 80,
"start_time": 2037.227,
"text": " comes in every day wearing a blue shirt, and Sophia sees that, maybe she has a positive response to that, or the inverse of it, an embodiment of cognition, and maybe as well most by a certain color or something random in the environment, you know, so affect is what drives us, you know, what causes interest in us. The AI is not committed to saying, you know, unconscious or not conscious, it's the same to it, but what would be a driving factor, what would be affective mechanisms, what would they look like in the setting,"
},
{
"end_time": 2093.592,
"index": 81,
"start_time": 2066.783,
"text": " Yeah, it's a good question. Affective experience is obviously extremely central to human consciousness. I've actually argued that it's not absolutely central to consciousness in general. There could be a conscious being"
},
{
"end_time": 2116.374,
"index": 82,
"start_time": 2093.899,
"text": " with little or no affective experience. We already know human beings with wider and narrower affective range, but while getting into thought experiments, I quite like the thought experiments of the, I think we were talking the other night about the philosophical Vulcan, inspired by Spock from Star Trek, who was supposed to lack emotions. I mean, Spock was a terrible example because he was half human and"
},
{
"end_time": 2143.677,
"index": 83,
"start_time": 2116.783,
"text": " Often got emotional and they go through, even Vulcans go through, bonfire every few years. However, a philosophical Vulcan is a purer case. Conscious being sees, hears, thinks, reasons, but never has an affective state. I think that's perfectly coherent. Humans aren't like that, but I think it's, I think that's coherent. I think that I've argued a being like that would still have moral status. So affect, suffering is not the be all and end all for moral status. That said, affect is very,"
},
{
"end_time": 2167.773,
"index": 84,
"start_time": 2143.951,
"text": " crucial for us. What would drive a philosophical Vulcan to do what they do? Not their affective states, not feelings of happiness and frustration and so on. Rather, I think it would be more of like the kind of creature described by Immanuel Kant, who has these kind of colder, you know, goals and desires. It could still want to advance science and protect his family and so on, in the absence of"
},
{
"end_time": 2183.78,
"index": 85,
"start_time": 2168.046,
"text": " So I think it would be possible, even if we didn't have good mechanisms for affective experience in AI, I think we could still quite possibly design a conscious AI. That said, one very natural route to conscious AI is to try and build in."
},
{
"end_time": 2214.104,
"index": 86,
"start_time": 2184.36,
"text": " Carla,"
},
{
"end_time": 2240.452,
"index": 87,
"start_time": 2215.742,
"text": " I wonder what you think about this thing that I find interesting, this asymmetry, that you were talking about, mouse-level consciousness. And we're all like, oh my god, if that happens, you need to hire warriors to do these things. I mean, what do you think about this asymmetry, this creature that most of us think are conscious, should fall within the moral protections"
},
{
"end_time": 2262.261,
"index": 88,
"start_time": 2240.879,
"text": " Yeah, I mean, I think my own view is that any conscious being at least deserves moral consideration. So I think absolutely mice deserve moral consideration. Now, exactly how much moral consideration, that's itself a huge issue in its own right. Probably not as much moral consideration as a human being. That is probably some."
},
{
"end_time": 2293.131,
"index": 89,
"start_time": 2263.37,
"text": " I don't know, even a scientist running a lab where they put mice through all kinds of things, I think they at least give the mice some moral consideration. They probably try not to make the mice suffer totally gratuitously. Even that's a sign of some moral consideration. That said, they may be willing to put them through all kinds of suffering when it's scientifically useful, which means they're not giving them that much moral consideration. And I'm very prepared to believe we should give mice and fish and many animals much more moral consideration than we do exactly."
},
{
"end_time": 2321.032,
"index": 90,
"start_time": 2293.968,
"text": " what the right level is, I have, I don't really know. But yeah, I think much of the same goes for AI, maybe initially we'll have AI systems with the kind of something like maybe, I don't think it's out of the question that conscious AIs could have something like the current AI systems could have something like the conscious experience of, you know, a worm or maybe even a fish or, or, or something like this, and thereby already deserve some"
},
{
"end_time": 2348.37,
"index": 91,
"start_time": 2321.715,
"text": " moral consideration. That said, it's not with AIs, unlike, say, mice. Well, I guess in flowers from Algernon, the mouse gets to be very smart very soon and suddenly reaches human level intelligence. Doesn't happen with real mice. But with AIs, you know, it's gonna be one day mice, one day fish, next day mice, next day primates, next day humans. And, you know, obviously, when the issue is really going to hit home is when we have AI systems as sophisticated"
},
{
"end_time": 2367.892,
"index": 92,
"start_time": 2348.831,
"text": " May I jump in and follow up on that?"
},
{
"end_time": 2392.176,
"index": 93,
"start_time": 2368.507,
"text": " That's such an interesting question because you could also envision a hybrid AI system such as an animat, if you will, with a biological component as well as a non-biological component or a neuromorphic component and a non-neuromorphic component. Maybe you get something like the consciousness of a mouse, but super intelligent."
},
{
"end_time": 2410.691,
"index": 94,
"start_time": 2392.688,
"text": " I mean, I'm wondering if we should be more careful about assuming a correlation between level of consciousness and level of intelligence and what this issue does in the moral calculus of concern."
},
{
"end_time": 2438.473,
"index": 95,
"start_time": 2411.783,
"text": " Yeah, it's a great question. Do you want to throw that one? I know you wanted to throw it up into the audience at some point. I'm happy to take on this one. I think consciousness and intelligence are, to some degree, dissociable. I think that's especially so for, say, sensory consciousness, affective consciousness, and so on. I think cognitive consciousness, I am inclined to think, has got some strong correlation with consciousness, relatively"
},
{
"end_time": 2468.012,
"index": 96,
"start_time": 2438.814,
"text": " Unintelligent systems, you know, worm and so on, probably doesn't have much in the way of cognitive consciousness. Humans have far more developed cognitive consciousness. And even in sensory consciousness and affective consciousness, we have rich sensory and affective states. That's largely, I think, due in significant part to their interaction with cognitive consciousness. And I'm inclined to think people say, you know, Bentham said what matters for animals is it can they talk? Can they reason? No, it's can they suffer?"
},
{
"end_time": 2494.087,
"index": 97,
"start_time": 2468.865,
"text": " I'm actually like maybe Bentham was a little bit too fast. It's like your reasoning and cognition is actually very important for for moral status. And that I think does at least correlate with with intelligence. But affect on the other hand, yeah, maybe a mouse can be suffering hugely. And that that suffering ought to get weight now considerations to some degree independent of intelligence. I think Kirk, from theories of everything, who is the"
},
{
"end_time": 2522.466,
"index": 98,
"start_time": 2494.531,
"text": " Co-MC is going to jump in now with a question. Is that right? Sure. Is it all right if Valerie answers one question? Because I know that she has one that's been burning. A burning question. Gosh. A burning question. It is Charlie Roebuck being hypnotized. Because when you hypnotize a person, you're going into the subconscious, getting in relation to bring it forward, to see what's for the designers. What's your view?"
},
{
"end_time": 2546.766,
"index": 99,
"start_time": 2523.097,
"text": " That is a great question. I have no idea about the answer, but maybe someone here does. Anyone? Anyone hypnotized in the AI system? There are people who have done simulations of the conscious and the unconscious. You know any AI system simulating the Freudian unconscious, Claudia? Someone should be doing this for sure. This reminds me though of"
},
{
"end_time": 2563.78,
"index": 100,
"start_time": 2547.073,
"text": " Issues involving testing machine consciousness and it reminds me of, for example, Ed Turner's, and I guess it was sort of my view too, on writing a test for machine consciousness that was probing to see if there was the felt quality of experience."
},
{
"end_time": 2587.108,
"index": 101,
"start_time": 2563.78,
"text": " and actually think that cases of hypnosis, you know, if you could find that kind of phenomenon at the level of machines, it could very well be an interesting indication that something was going on. But it leads us to a more general issue that we wanted to raise with the audience, which is what methodological requirements are appropriate"
},
{
"end_time": 2601.903,
"index": 102,
"start_time": 2587.398,
"text": " for testing the machine and deciding whether a machine is conscious or not. And maybe I'll turn it over to Ed Churna for the first answer and then back there, then Gertzl for the second."
},
{
"end_time": 2620.418,
"index": 103,
"start_time": 2602.619,
"text": " Razor blades are like diving boards. The longer the board, the more the wobble, the more the wobble, the more nicks, cuts, scrapes. A bad shave isn't a blade problem, it's an extension problem. Henson is a family-owned aerospace parts manufacturer that's made parts for the International Space Station and the Mars Rover."
},
{
"end_time": 2648.916,
"index": 104,
"start_time": 2620.418,
"text": " Now they're bringing that precision engineering to your shaving experience. By using aerospace-grade CNC machines, Henson makes razors that extend less than the thickness of a human hair. The razor also has built-in channels that evacuates hair and cream, which make clogging virtually impossible. Henson Shaving wants to produce the best razors, not the best razor business, so that means no plastics, no subscriptions, no proprietary blades, and no planned obsolescence."
},
{
"end_time": 2665.265,
"index": 105,
"start_time": 2648.916,
"text": " It's also extremely affordable. The Henson razor works with the standard dual edge blades that give you that old school shave with the benefits of this new school tech. It's time to say no to subscriptions and yes to a razor that'll last you a lifetime. Visit hensonshaving.com slash everything."
},
{
"end_time": 2692.722,
"index": 106,
"start_time": 2665.265,
"text": " If you use that code, you'll get two years worth of blades for free. Just make sure to add them to the cart. Plus 100 free blades when you head to H E N S O N S H A V I N G dot com slash everything and use the code everything. Let me just quickly say the kind of meta idea behind the specific test that Susan and I published a few years ago."
},
{
"end_time": 2718.763,
"index": 107,
"start_time": 2694.104,
"text": " is as follows. Since we can't directly detect felt experience, subjective experience, you might ask, what do entities that have self-consciousness learn from this experience? Do they get any information from that experience that isn't otherwise available to them? And if so, looking for that"
},
{
"end_time": 2745.486,
"index": 108,
"start_time": 2719.531,
"text": " testing to see if they have that information would be an indirect way as a proxy for the experience space. And I think we use this with people a great deal. And the example that Susan and I turned to was almost everyone understands very easily ideas like reincarnated ghosts, out-of-body experiences, life after death."
},
{
"end_time": 2773.712,
"index": 109,
"start_time": 2746.032,
"text": " because from their experience they perceive themselves as existing as an entity that has experience as different from a physical object. So if you say to someone, you know, after you die you will be reincarnated in the body. That makes sense to people like that. If you say to them, after you die your soul will be reincarnated."
},
{
"end_time": 2798.951,
"index": 110,
"start_time": 2774.753,
"text": " That sounds inappropriate and you have to explain a lot what you could possibly mean from that. And for a variety of human experiences, you know, the felt experience of things like a broken heart and a romantic relationship, a culture shock, an anxiety attack, synesthesia is a little more exotic."
},
{
"end_time": 2829.428,
"index": 111,
"start_time": 2799.684,
"text": " If you're talking to someone, if you've had one of those experiences and you speak to someone who has not had them or who has also had them, you can tell the difference very quickly. They get it if they've already had the experience and you can go ahead and talk about what it's like to have an anxiety attack or whatever. If not, you have a lot of explaining to do to get them to understand what you're talking about. And so the sort of structure of the type of tests that Susan and I"
},
{
"end_time": 2857.261,
"index": 112,
"start_time": 2829.923,
"text": " proposed was that you isolate the machine from any information about what people say about their thought experience, and then try to get them to understand some of these concepts. Thank you, Ed. And now, staying grateful, I had a comment as well. Yeah, I've thought about this topic a fair bit, and then"
},
{
"end_time": 2876.032,
"index": 113,
"start_time": 2857.927,
"text": " Partial solution I've come up with, we can't do brain-to-brain or brain-to-machine interfacing. So I think, I mean, very broadly speaking, people talk about first-person experience, subjective feel of being yourself."
},
{
"end_time": 2901.886,
"index": 114,
"start_time": 2876.527,
"text": " second-person experience, which is more like a boomerian, eye-ballast experience. It's directly perceiving the mind of another person. The third-person experience, which is sort of objectionish, like sharing experience in the physical realm. What's interesting, we think about the somewhat hard problem of consciousness this year, the contrast of first-person experience"
},
{
"end_time": 2929.65,
"index": 115,
"start_time": 2902.278,
"text": " which is your subjective, manually felt quota with science, which in essence is about how the people with minds commonly agree that a certain item of data is in the shared perception of whatever is in that community. Once we can sort of wire Wi-Fi, our brains together, let's say that I can wire my brain with the brain of this gentleman right here, it would be an amazing experience, right?"
},
{
"end_time": 2955.282,
"index": 116,
"start_time": 2930.23,
"text": " you can increase the bandwidth of that wire, then we would feel like we were controlling twins. It seems like this sort of terminology, which is probably not super, super far off, it feels like this gives it a different dimension of view. It's somewhat bypassing the higher probability of consciousness, although not actually solving it, because it brings into the domain of"
},
{
"end_time": 2971.169,
"index": 117,
"start_time": 2955.725,
"text": " I feel this has existed in a less advanced form in the history of Buddhist and various spiritual traditions."
},
{
"end_time": 2995.213,
"index": 118,
"start_time": 2971.305,
"text": " Well, people are following common meditation protocols and psychedelic protocols, and they have the sense that they're coming seriously to the minds of other people there. Ben, that is fascinating. And you know, I've been telling my students about the craniopagus twin case, I don't know if you know about that, conjoined twins in Canada, who have a dalamic bridge. Wow."
},
{
"end_time": 3017.841,
"index": 119,
"start_time": 2995.572,
"text": " It's a novel anatomical structure that we don't have it in nature, as far as I know, previous to this, or those documented. And of course, everybody wants to study them. The parents are very protective, however, but it's well documented. They don't refuse each other, even though they're conjoined, that when one eats peanut butter, she'll do it to drive her twin crazy."
},
{
"end_time": 3038.677,
"index": 120,
"start_time": 3018.422,
"text": " I wanted to bring that over to Dave Chalmers and see what your reaction is to Ben's Oing."
},
{
"end_time": 3059.394,
"index": 121,
"start_time": 3039.377,
"text": " Oh, you know, I love the idea of mind merging as a test for consciousness. I'm not totally convinced because of course you could you could mind merge with a zombie and from the first person perspective, it would still feel great. You could probably you could probably mind merge with GPT-4 and it'd be pretty sense. So you're right, Bob. This is Ben asks,"
},
{
"end_time": 3085.572,
"index": 122,
"start_time": 3059.684,
"text": " If you mind merged with the philosophical zombie, could it feel the same as mind merging with a fully conscious being? Kind of mind merging I'm having in mind, it's like, we're still, there's still two minds here, right? I mean, I'm experiencing, I'm still me experiencing you. So it's really, you know, you're, I'm a conscious being already, and this is having massive effects on my consciousness, you know, psychedelic drugs could have massive effects on my conscious consciousness without themselves being conscious."
},
{
"end_time": 3092.312,
"index": 123,
"start_time": 3085.896,
"text": " without themselves being"
},
{
"end_time": 3120.759,
"index": 124,
"start_time": 3092.739,
"text": " So you have the idea of merging into become one common mind. I would still worry that a conscious being and an unconscious being could merge into one unified freaky conscious mind. Yeah, but now we need the criteria for which distinctive feelings are actually the feelings that track the other being. The other version I like is gradually turning yourself into that being. You know, I mean, the classic version is you replace your neurons by silicon chips."
},
{
"end_time": 3141.118,
"index": 125,
"start_time": 3120.759,
"text": " So you gradually transform yourself into a transformer during that simulation of yourself."
},
{
"end_time": 3167.449,
"index": 126,
"start_time": 3141.425,
"text": " Okay, so this question is to everyone."
},
{
"end_time": 3196.032,
"index": 127,
"start_time": 3168.916,
"text": " If we could build the Matcha Scallions, the question still remains, should we? Yes. Okay, someone other than me. If we can, we will. But should we? From an ethical standpoint, should we? We have so many people suffering on this planet already."
},
{
"end_time": 3226.578,
"index": 128,
"start_time": 3198.063,
"text": " Stephanie asks, if we build conscious AI, how do we manage their empathy? Stephanie's point is an excellent one. So, you know, if we build conscious AI, how do we know it won't be associated? How do we know that it will be empathetic and treat us well? Right. I mean, we would obviously have to test the impact of consciousness on different AI systems and not make any assumptions."
},
{
"end_time": 3252.483,
"index": 129,
"start_time": 3227.039,
"text": " that just because in the context of one AI architecture, the machine is generous and kind, that in the context of other machine architectures, it will also be kind. We'll have to bear in mind that machines can be up to their architecture when they become super intelligent. I think somebody else had their hand up to them. Let's pass it down to our new friend from University of Kentucky."
},
{
"end_time": 3278.933,
"index": 130,
"start_time": 3254.821,
"text": " Yeah, so I have a very quick novel argument as to why we should build conscious and super-intelligent AI. So if we build conscious or super-intelligent AI, then it might as well be omniscient. If it is as super-intelligent as we might imagine it being in a singularity, then it might as well be omniscient."
},
{
"end_time": 3295.196,
"index": 131,
"start_time": 3279.36,
"text": " Now, if it's omniscience and it's all knowing, it follows that it's also omnipotent, right? Because if you know everything, then you know how to do everything, right? It's all knowledge, ragsport, theoretical. Now, if it's omniscience and all powerful, or omnipotent,"
},
{
"end_time": 3315.572,
"index": 132,
"start_time": 3295.367,
"text": " The other thing that's left is omni-benevolent, right? Because if we are conscient about morality, then the more rational we are, the more moral we are, right? And if we're utilitarian about morality, then we're better at figuring out how to maximize utility if we are more rational, for we have better calculative ability. So whether or not you're a conscient or utilitarian,"
},
{
"end_time": 3337.671,
"index": 133,
"start_time": 3315.572,
"text": " It still follows that the more rational you are, the more capacity you have to be moral, right? I will be designing what we've always traditionally have thought of as divinity. And if it's Omnibus Netherlands, which follows from Omniscience, then why not, right? It will bring about the right moral state, or I guess the right moral conditions for all of us to thrive. So that's my quick argument."
},
{
"end_time": 3368.183,
"index": 134,
"start_time": 3338.268,
"text": " Thank you very much. That was really interesting. So he was alluding to a lot of issues in theology about the definition of God, very suggested. So, you know, one thing I want to point out, and we just have to move on really quickly, though, is just because a machine is super intelligent does not involve entailed and it's all knowing, all powerful, and all in England, right? Superintelligence is defined as simply outsmarting"
},
{
"end_time": 3397.159,
"index": 135,
"start_time": 3368.575,
"text": " humans, any single human in every respect, scientific reasoning, social skills, and more. But it does not in all entail that that entity has the classic traits in the Judeo-Christian tradition of God. Okay, so that said, let me turn to our next question. And again, this is for the audience. So going back to Dave's wonderful paper, which really got things going."
},
{
"end_time": 3426.271,
"index": 136,
"start_time": 3397.312,
"text": " One thing I, these very curious guys, I was hearing this, as I watched the whole Blake LeMoyne mess unfold, I sort of wondered what was going on, and I started to go down the rabbit hole and listening to LeMoyne's podcasting, where he's invited on a lot of shows and just listening to the details of his interaction with Google behind the scenes. So one thing that he reports is that they have a reading group over"
},
{
"end_time": 3452.261,
"index": 137,
"start_time": 3426.664,
"text": " at Google, some very smart engineers who were designing Lambda, and they were studying philosophy. So over there, they were reading Daniel Dennett on consciousness, David Chalmers, and so on, studying it. I thought that was cool. I thought that was really cool. And the interesting thing, too, was in the media, Lemoine was characterized as being somewhat of a religious fanatic."
},
{
"end_time": 3481.988,
"index": 138,
"start_time": 3453.524,
"text": " But if you listen to the recents he provided, he was a computational functionalist who had been reading a lot of Dan O'Denna's argumentation with straight from consciousness and related texts. So what I have as a question for everybody is, given Google's reaction to the whole thing, which was to sort of silence the debate and laugh at the loin, I'm wondering"
},
{
"end_time": 3508.968,
"index": 139,
"start_time": 3482.125,
"text": " Why would Google and maybe other big tech companies not want to discuss the issue of large-language model consciousness? I'll just put that to the audience to see if there are any ideas. Thanks. I, if I can, would like to go back to the question you raised about should we idea. And I come from this as a retired surgeon and not doing bioethics for 30 years."
},
{
"end_time": 3534.514,
"index": 140,
"start_time": 3510.435,
"text": " I'm more involved recently in changes in genetics and the technology that's available for that than I am with the computer field. But they're not completely separate. And I'm reminded of the period of eugenics in the world and in our country where it's right now for people."
},
{
"end_time": 3564.411,
"index": 141,
"start_time": 3534.957,
"text": " at this belief that they could improve the species and make humans better with technology. And in retrospect, they were horribly misguided at doing things like sterilization and integration. We need to remember that these well-intentioned, misguided people and learn some humility. Thank you. Now back to Kurt. Sure. So I... Hello, hello."
},
{
"end_time": 3593.422,
"index": 142,
"start_time": 3564.735,
"text": " I have two questions. I'll sneak in. My question is, what questions are we not asking about AI? For instance, we have plenty of talk here about ethics, consciousness. What else is there that we're not focusing on? It's just as exigent, the more interesting. So that's question number one. And question number two is, are we overly concerned with type one errors of the material test at the expense of type two? So that is, are we making the test so strange that we"
},
{
"end_time": 3619.241,
"index": 143,
"start_time": 3593.933,
"text": " Unfortunately, the answers to my last question became garbled and so we're not able to hear the two audience members' responses, so feel free to add your own answers in the comments section below."
},
{
"end_time": 3630.964,
"index": 144,
"start_time": 3619.241,
"text": " The links to that are in the description, as well as all of the other talks from mind fest that is where this conference took place at the Florida Atlantic State University Center for the future mind focusing on the AI and consciousness connection."
},
{
"end_time": 3659.906,
"index": 145,
"start_time": 3631.613,
"text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people."
},
{
"end_time": 3689.753,
"index": 146,
"start_time": 3659.906,
"text": " You should also know that there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, disagree respectfully about theories, and build as a community our own toes. Links to both are in the description. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well."
},
{
"end_time": 3715.094,
"index": 147,
"start_time": 3690.094,
"text": " Last but not least, you should know that this podcast is on iTunes, it's on Spotify, it's on every one of the audio platforms. Just type in theories of everything and you'll find it. Often I gain from re-watching lectures and podcasts and I read that in the comments. Hey, toll listeners also gain from replaying. So how about instead re-listening on those platforms? iTunes, Spotify, Google Podcasts, whichever podcast catcher you use."
},
{
"end_time": 3744.804,
"index": 148,
"start_time": 3715.094,
"text": " How about Captain Crunch's Crunch Berries with Breakfast?"
},
{
"end_time": 3770.589,
"index": 149,
"start_time": 3746.698,
"text": " Whoa, Dad, we're on- Crunch Island? It's Jean Laffoot! And he stole our Crunch! Quick! The Zipline! He's getting away! Throw our last Crunch Berry! Nooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo"
},
{
"end_time": 3778.49,
"index": 150,
"start_time": 3772.654,
"text": " Think Verizon, the best 5G network is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store"
}
]
}