Theories of Everything with Curt Jaimungal

Beyond our Consciousness: Aliens, Animals, Infants, & AI | Passos, Montemayor, Mindt

April 18, 2023 · 1:29:02

Note: Timestamps may be inaccurate if the MP3 has dynamically injected ads.

Transcript

Enhanced with timestamps · 215 sentences, 12,763 words
Method: API-polled · Transcription time: 85m 58s
[0:00] The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze.
[0:20] Culture, they analyze finance, economics, business, international affairs across every region. I'm particularly liking their new insider feature. It was just launched this month. It gives you, it gives me, a front row access to The Economist's internal editorial debates.
[0:36] Where senior editors argue through the news with world leaders and policy makers in twice weekly long format shows. Basically an extremely high quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a toe listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
[1:06] Think Verizon, the best 5G network, is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. Now what to do with your unwanted bills? Ever seen an origami version of the Miami Bull? Jokes aside, Verizon has the most ways to save on phones and plans.
[1:23] Listen up, people. Over the years, I've learned how important hydration is in my workouts and everyday life. It's the key to helping my body move, recover, and just have a good time when I'm exercising and staying active. Things go even better when I'm well hydrated, before I even start moving.
[1:52] Nuun Hydration doesn't want you to wait to hydrate. They want you to start hydrated. Nuun Sport hydration tablets hydrate better than water alone. Just drop them in your water, let them dissolve, and enjoy. They contain five essential electrolytes and come in crisp and refreshing flavors like strawberry lemonade, lemon lime, and many more.
[2:11] They also have non-GMO, vegan, and gluten-free ingredients and only one gram of sugar. Nuun hydrates you so you can do more, go further, and recover faster. And that means you can have a heck of a lot more fun. Since hydrated humans have the most fun, head over to ShopNow on nuunlife.com and pick some up today so you're ready for anything. Because anything can happen after Nuun.
[2:37] If we started growing these petri dish kind of mini brains to avoid experimenting on live humans, at what point have we just reached the point where we've grown a sufficiently complex mini brain to then run into the ethical dilemma that we wanted to avoid in the first place?
[2:50] Today we talk about artificial intelligence as well as evidence for consciousness. The infamous problem of other minds has embedded in it the question of how to explicitly test for consciousness. Perhaps also how to test for degrees of consciousness if consciousness isn't merely binary. That is, specifically, can we test for consciousness in infants?
[3:10] And animals. It seems obvious to most that babies are conscious, but what are the arguments for and against? And same with animals. If you think animals are conscious, then at what point does that start? For instance, are viruses conscious? Similarly, if you think animals are not conscious, then where does it end? Why are we conscious? At least seemingly, but animals are not, if that's the argument that's being made. Further, another question that's raised is, well, what rights do AIs have
[3:36] That is, what ethical rights apply to emerging artificial intelligences like large language models? Claudia Passos is an assistant professor of bioethics at NYU studying infant consciousness. Garrett Mindt is a professor of philosophy at Florida Atlantic University focused on novel information-theoretic metaphysics. And Carlos Montemayor is a professor of philosophy whose research focuses on the intersection between philosophy, epistemology, and cognitive science.
[4:04] This panel is from the MindFest conference brought to you by the Center for the Future Mind, filmed stylishly at the beautiful beach of Florida Atlantic University. Thank you to Susan Schneider for organizing this. We also have from that conference Stephen Wolfram, who talks about physics, consciousness, and ChatGPT, as well as Ben Goertzel giving a lecture on the same topic.
[4:25] My name is Curt Jaimungal. My background is in mathematical physics and this channel is called Theories of Everything. It's dedicated to explicating the variegated landscape of theories of everything, of TOEs, primarily from a mathematical perspective, from a physics perspective, but we also explore the constitutive role consciousness may have in engendering the laws as we see them. Thank you to Brilliant for helping subsidize the traveling costs.
[4:48] You may not know this, but I pay out of my own personal pocket for every expense, such as flights, taxis, food, even subscriptions like software tools, Adobe for instance, which is editing this right now, and capital costs like increased RAM and computers and so on. So help from yourself via Patreon, patreon.com slash CurtJaimungal, helps a tremendous, tremendous amount.
[5:10] Okay, so I will speak about infant consciousness. I'll start with this
[5:38] provocative title, are infants conscious? So this is part of my research project where I'm trying to see what theories of consciousness can tell us about infant consciousness, what are their predictions.
[5:55] But also I have a part of this project that is related to what it's like to be a newborn, what kind of phenomenology they can have. So it's trying to cover many issues in infant consciousness from theories to phenomenology.
[6:13] Are infants conscious? So we know that infants are awake, attentive, they smile to us, they move their eyes, they respond to stimuli in the environment, but are they conscious? Do they have subjective experiences? So this is a question not just for infants, but also questions for other creatures. Are machines conscious?
Are animals conscious? Are cerebral organoids conscious?
[6:51] Are patients in vegetative states conscious? This is a question we call the distribution problem in philosophy: how is consciousness distributed among other creatures? And the question here is, how can we know, in creatures that don't have verbal reports, that cannot tell us their feelings, whether they are conscious or not?
[7:16] Can we rely on their behaviors and what kind of behaviors might indicate that they are conscious? So I'm concerned here with phenomenal consciousness.
[7:28] So, phenomenal consciousness is subjective experience. I think Anand's talk explained for us what phenomenal consciousness is and gave its definition, so I'm not bringing the definition here. I'm just concerned with whether infants have this type of subjective experience.
[7:52] So are newborns, and this is relevant, I'm talking about infants, but I'm much more concerned about the beginning of life, infants at birth, newborns, when we are born, are we conscious? So this raises the problem of infant minds. So like animals, in the case of infants, we don't have verbal reports or introspective thoughts to tell us if they're conscious or not.
[8:22] So how can we know whether infants are conscious? We cannot directly observe consciousness in others, even in adults. But in the case of other humans, adults, we can rely on verbal reports. So you all here can tell me if you are conscious or not of a stimulus that I present to you. But you cannot ask infants if they are conscious or not of that stimulus. So if you cannot use verbal reports, what kind of evidence can we use?
[8:53] So this raises a type of dilemma. We cannot rely on first-person methods, that is, verbal reports or introspective thoughts, to measure consciousness in infants. But we know that
[9:13] third-person methods are insufficient for detecting consciousness: a behavioral marker alone cannot tell us whether a creature is conscious or not. So which methods can we use?
[9:28] Okay, so this is my preferred method, which comes from the animal consciousness literature. We can combine first-person methods that come from adults, the way adults report when they are conscious of a stimulus,
[9:46] with third-person methods, like behavioral or neural markers, and from this type of evidence, from both the adult human case and the infant case, we can infer that infants are conscious. So, from this methodology we can infer
[10:07] that the best way to explain that behavior or the best way to explain the neural marker we are finding in infants is that the infants are conscious. So it's the inference of the best explanation. So we start first observing correlations between consciousness and behavior or brain processes in adults.
[10:28] From this type of observation we can establish correlations in the case of adults: correlations with their brain states and with their behaviors, which correlate with their telling us introspectively that they are conscious.
[10:46] Through this we can isolate behavioral and neural markers of consciousness. Once we have those markers, we can see whether infants show the same behavioral and neural markers, and from this we can determine that the best way to explain the presence of those markers in infants is that infants are conscious.
[11:08] So I will suggest two approaches that can help us detect those neural and behavioral markers. One approach is through observation of behavioral and neurobiological signs of consciousness. And a second approach is through theories of consciousness, how theories of consciousness can tell us
[11:33] what types of neural or behavioral markers are indications of consciousness. In this talk I won't have time to go through all the theories, so I thought the best way to introduce the topic would be to focus, and this is what I do, on the first approach, and I'll just say a little bit at the end about theories of consciousness. Okay, behavioral and neurobiological signs of consciousness.
[12:00] So I focus mostly on pain as a test case, but everything I say about pain could be applied to other sensory systems. We can use the same type of methodology to infer consciousness through perception and the sensory systems. Why am I choosing pain?
[12:19] Because pain is a paradigmatic case in philosophy and psychology of a conscious experience. And there is a very large debate in the past whether infants feel pain or not. And we still find nowadays some philosophers and neuroscientists that sometimes raise some skeptical issues related to pain experience in infants.
[12:47] So, pain in infants. We know that adults feel pain and can report a variety of types of pain experience. But do infants feel pain?
[13:04] We can find behavioral, neurophysiological, and anatomical evidence of pain: the presence of avoidance reactions to bodily damage, as in mammals in general; the presence of specific adult-human reactions, facial expressions, behavioral expressions, crying; and similar brain regions. I'll show one piece of evidence related to brain regions: similar neural mechanisms are activated.
[13:33] So what are the behavioral signs? The behavioral signs of consciousness include observable signs of pain: crying, facial expressions, body movements, and avoidance reactions, which have become a very important marker of pain experience, used also to detect pain experience in animals.
[13:59] And this is a nice result from a new experimental paradigm, trying to show that infants not only have the same types of behavioral reactions to pain, at the same level or intensity of pain, as an adult. This is an experimental paradigm where
[14:23] newborns and their moms, so an adult and an infant, were exposed to the same level and intensity of pain, and they showed similar behavioral reactions. But these infants also showed similar brain regions being activated
[14:41] when they are feeling pain in those cases. Adults have 20 areas of the brain activated when they feel pain, and infants have 18 of those 20 areas. So, pretty similar. You can see
[14:57] from the image that pretty similar areas are activated. So given the behavioral signs we have and the neural markers of pain, the best explanation for those behaviors and those markers is that infants are conscious of their pain experience. And from this you can build an argument from pain behavior. First premise: pain experience explains
[15:27] avoidance reactions in adults. Second: infants and adults have similar avoidance reactions. Third: if pain experience explains avoidance reactions in adults, it explains similar avoidance reactions in infants. Conclusion: pain experience explains avoidance reactions in infants. However, we know we cannot
[15:55] reply completely to all the considerations a skeptic can have. So, for instance, the image is a little bit blurry, but the idea of this slide is to show you different faces, and how difficult it is, just from the faces, to tell which express reactions similar to pain.
[16:23] It's hard to tell which of those babies is really feeling pain. Some of those babies might just be irritated or frustrated, or feeling anger, or feeling some kind of
[16:40] distress, but not really a pain reaction. So we know there is a residual challenge in this case, but I think the best explanation is still that infants are having pain reactions as in the case of adults, and this
[17:00] goes together with the type of pain experience that adults can have. Although I acknowledge that residual skepticism can come back, I think the best way to explain
[17:16] the scientific evidence and the behavioral observations we have is that infants are having conscious experiences. So now, theories of consciousness. I won't have time to really go into the details of each theory; what I want to do
[17:36] with this final part of the talk is just go for a kind of overview of the theories and what the theories can tell us about infant consciousness.
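The argument from pain behavior above has a simple deductive core, which can be rendered formally; this is an editorial sketch, and the proposition names below are illustrative, not from the talk:

```lean
-- Sketch of the argument from pain behavior (names are illustrative).
-- ExplainsAdults  : pain experience explains avoidance reactions in adults
-- ExplainsInfants : pain experience explains avoidance reactions in infants
variable (ExplainsAdults ExplainsInfants : Prop)

-- Premise 1: pain experience explains avoidance reactions in adults.
-- Premise 3: if it explains them in adults, it explains the similar
--            reactions in infants (supported empirically by Premise 2,
--            the similarity of infant and adult avoidance reactions).
-- The conclusion follows by modus ponens.
example (p1 : ExplainsAdults)
    (p3 : ExplainsAdults → ExplainsInfants) : ExplainsInfants :=
  p3 p1
```

Formally the conclusion is just modus ponens on premises 1 and 3; the philosophical work, as the talk notes, is in defending the conditional premise against the residual skeptical challenges.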
[17:48] So what is relevant in the case of theories of consciousness is philosophical and scientific theories that can give us sufficient or necessary conditions for consciousness. We need those; if a theory just explains consciousness without
[18:07] postulating any kind of necessary or sufficient conditions, it's hard for us to predict whether infants are conscious or not. But some scientific theories and some philosophical theories come with more objective predictions, objective measures, of consciousness. So those are the theories I
[18:34] think are the most interesting for discussing the case of infant consciousness. So: first-order representationalism, I will address one type of representationalist theory, but this could apply to many first-order representationalist theories; higher-order theories; and among the scientific theories, integrated information theory and global workspace theory.
[19:00] As I said before, I don't have time to go into detail about what kinds of measures those theories propose, but I'm happy to say a little more in the Q&A. I will just say, given those theories, what their predictions are regarding infant consciousness.
[19:26] So, representationalist theories, and I have in mind here a classic first-order version of representationalism, proposed by Tye, and integrated information theory: those theories clearly predict that infants are conscious, given the sufficient and necessary conditions they propose.
[19:51] Some forms of higher-order theory would raise a problem for infant consciousness. Some higher-order theories suggest that consciousness correlates with higher cognition, requiring a representation to itself be represented for the creature to be conscious of that representation. Those theories would be a challenge for infant consciousness.
[20:19] However, there are versions of those higher-order theories that postulate less higher-order cognition. For instance, the self-representationalist version of the theory could be compatible with infant consciousness, as could some versions of global workspace theory that are less tied to frontal, higher-order processing. Global workspace theory tells us that for a stimulus to be conscious,
[20:49] That stimulus has to be broadcast in a global workspace in the frontal areas of the brain.
[20:56] And we know that infants don't have those frontal areas well developed; those areas are still developing. I'll say a little about brain development. So global workspace theory would have a problem inferring that infants are conscious if infants don't have the prefrontal areas. But I still think that a less higher-order version of global workspace theory might be compatible with infant consciousness.
[21:23] We can combine this with evidence from synaptogenesis, from brain development. This evidence from synaptogenesis shows us that the areas of the brain that develop
[21:41] and are activated first are the areas related to sensory and motor cortex. Only later are the prefrontal cortex and the areas related to higher cognition activated. So given that,
[22:00] my assessment is that there is independent reason to think that higher-order theories and frontal global workspace theories impose an overly demanding condition for consciousness, because they impose the idea that you need higher-order cognition for consciousness. It's independently implausible that consciousness requires those higher-order cognitive processes and higher-order concepts.
[22:30] So phenomenal consciousness often involves sensory consciousness, or sensory experience, without those higher-order thoughts. And if this is right, the theories that require, as
[22:46] sufficient and necessary conditions, higher-order thoughts or higher-order concepts are theories that might be overly demanding for consciousness. If so, the most plausible theories are consistent with consciousness in newborns. So, in conclusion: neurobiological and behavioral evidence suggests that infants are conscious at birth. The most plausible theories of consciousness are also consistent
[23:15] with consciousness in infants. A further question, which I can say a little about in the Q&A, is how this methodology could be applied to the case of AI systems or machine consciousness. Okay, thank you.
[23:37] Can you hear that sound? That's the sweet sound of success with Shopify. Shopify is the all-encompassing commerce platform that's with you from the first flicker of an idea to the moment you realize you're running a global enterprise. Whether it's handcrafted jewelry or high-tech gadgets, Shopify supports you at every point of sale, both online and in person. They streamline the process with the Internet's best converting checkout, making it 36% more effective than other leading platforms.
[24:05] There's also something called Shopify Magic, your AI powered assistant that's like an all-star team member working tirelessly behind the scenes. What I find fascinating about Shopify is how it scales with your ambition. No matter how big you want to grow, Shopify gives you everything you need to take control and take your business to the next level.
[24:25] Join the ranks of businesses in 175 countries that have made Shopify the backbone of their commerce. Shopify, by the way, powers 10% of all e-commerce in the United States, including huge names like Allbirds, Rothy's, and Brooklinen. If you ever need help, their award-winning support is like having a mentor that's just a click away. Now, are you ready to start your own success story?
[24:50] Sign up for a $1 per month trial period at shopify.com slash theories, all lowercase. Go to shopify.com slash theories now to grow your business no matter what stage you're in. Shopify.com slash theories.
[25:08] Razor blades are like diving boards. The longer the board, the more the wobble, the more the wobble, the more nicks, cuts, scrapes. A bad shave isn't a blade problem, it's an extension problem. Henson is a family-owned aerospace parts manufacturer that's made parts for the International Space Station and the Mars Rover.
[25:26] Now they're bringing that precision engineering to your shaving experience. By using aerospace-grade CNC machines, Henson makes razors that extend less than the thickness of a human hair. The razor also has built-in channels that evacuate hair and cream, which makes clogging virtually impossible. Henson Shaving wants to produce the best razors, not the best razor business, so that means no plastics, no subscriptions, no proprietary blades, and no planned obsolescence.
[25:55] It's also extremely affordable. The Henson razor works with the standard dual edge blades that give you that old school shave with the benefits of this new school tech. It's time to say no to subscriptions and yes to a razor that'll last you a lifetime. Visit hensonshaving.com slash everything.
[26:11] OK, thanks. I will talk a bit about many of the topics that Claudia
[26:40] talked about, in the context of animal consciousness. One thing I want to address is how interesting it is that we have very asymmetric intuitions about animals and machines. We're very inclined to fantasize about artificial intelligence becoming conscious, but we know animals are conscious and we don't give them moral or legal standing. So I want to address that a little, and then talk about the kinds of considerations Claudia was talking about.
[27:10] So one thing that is important is, in the context of talking about intelligence, I'm going to talk a little bit more about intelligence in general. It is not absolutely obvious that consciousness entails intelligence. And it is not obvious that intelligence necessitates particular kinds of phenomenal consciousness.
[27:26] And I think that it's interesting here to talk about two different aspects. So in artificial intelligence, people talk about agency, agents being intelligent. But rarely do they think that these agents are conscious. They just think that they're agents that can solve problems. And animals definitely have that. We have that. But animals on top of that have phenomenal consciousness. So I want to distinguish that and then mention this distinction between access and phenomenal consciousness.
[27:57] And I think of this distinction in terms of two different kinds of cognitive grounding for agency. So an agent has intentions to act and those intentions to act are relevant for how that agent solves their problems.
[28:12] And it's not obvious that you need to be phenomenally conscious, in the relevant philosophical sense, to do that. It's also not obvious that you don't need it, right? So it's just an interesting question to ask. In the literature on animal cognition, and I will mention a few people who talk this way, one thing you can say about animals that are conscious is that they have a kind of anchoring provided by access functions.
You can think of it this way, and this is still philosophical work happening now, but for example Daniel Stoljar thinks of access consciousness as a kind of integrated attention.
[28:53] And you can distinguish between consciousness and attention; psychologists do it all the time. So that kind of anchoring gives you a kind of agency that is epistemically important, because it allows you to solve many problems. But there's also a kind of anchoring that is provided by phenomenal consciousness, the relevant notion of consciousness that Nagel and Dave Chalmers have of course contributed to, and this kind of anchoring provides a kind of familiarity
[29:19] that we would not have without the "what it is like": what it is like to experience pain, and so on. So when we experience pain, in the example Anand was talking about, it's not just that we're sensing bodily damage; we are familiar with something bad happening to us.
[29:39] Okay, so that also complicates the picture with respect to the kinds of preferences, needs, styles of cognition, and styles of mental agency. It also makes us more vulnerable than other systems to conflicts between these kinds of values and goals. In other terms, this broad distinction between access and phenomenal consciousness
[30:07] seems to entail two different kinds of perspectives on the world, which is another way people think about the first-person point of view. And of course, in philosophy the first-person point of view that really matters is the one of phenomenal consciousness, because that's the one that seems not reducible to function.
[30:25] Okay, so phenomenal consciousness: we've talked about the original Nagel article. Nagel talks about bats; of course that's a very famous thing about Tom Nagel, that his paper is called "What is it like to be a bat?" We think most animals have this kind of phenomenal consciousness. And moreover,
[30:45] the contribution after Nagel, again by Dave Chalmers but also other philosophers, is that it's not just that there's something it is like to be a conscious animal or a conscious creature; it's that the content of the "what it is like" is extremely specific to our experience. There's something very specific about what it is like for us to experience pain of a certain kind, or what it is like to experience lime when we're eating key lime pie, or something like that. And
[31:12] the question that is interesting here is where to draw the line, where to create a cut-off in the animal kingdom: you know, these animals are not conscious, these are conscious. In a very recent paper that I commented on, some authors want to say that most crustaceans are conscious in the phenomenally conscious sense that matters to philosophers, because they clearly experience pain according to many metrics.
[31:39] It's kind of hard to think about bees doing that. I mean, you've heard some of the philosophical views. If you're a panpsychist, or an intentionalist, then the question of whether these creatures are phenomenally conscious is up for grabs. There are many things you can say about this, but in the literature on animal cognition, what researchers are interested in is: can we take a set of measures that we commonly use to identify consciousness in humans and apply them to a set of animals that we never protect,
[32:08] because we think they're kind of like machines, or I don't know what we think about crustaceans. But the idea is that crustaceans count as conscious in the phenomenal sense. And again, this is interesting because of the asymmetry in our intuitions. It's kind of funny: we think that if AI, if ChatGPT or GPT-4, becomes conscious somehow,
[32:31] then it deserves to have rights, because it would be kind of conscious. But we never do that with animals, even though our intuitions are clearly that they're conscious. I mean, that's the beginning of the Tom Nagel paper: obviously bats are conscious, right? So that's just a funny asymmetry that we need to think about.
[32:54] Now, phenomenal consciousness comes with, and again, this is something that Anand said a lot about, so I'm not going to stop here, and more recent work by Dave talks about this too: phenomenal needs come with a specific kind of familiarity that is very rich in content. Nick Humphrey says that is what makes our lives meaningful: if we lost consciousness, we might still be able to be intelligent, but our lives wouldn't be as meaningful or as valuable.
[33:22] And we certainly, according to Humphrey, would not have aesthetic experiences. According to many others, we wouldn't have moral capacities of the relevant kind. Hume said that without experienced empathy, we would not be able to develop our moral capacities. Other authors, Robert Sapolsky and Frans de Waal, say these empathic capacities can be found in many animals, most animals even,
[33:48] And they are crucial for a sense of familiarity and social bonding. And so again, the idea is not just that some animals are conscious. Some animals are conscious in a way that really resembles the way we're conscious. So it's just a tricky question. What kind of standing we're going to give them if we don't want to give them moral or legal standing, right?
[34:15] There's also this other thing that, I mean, this is a funny way of parsing things. It's a way that I like to parse things because it speaks to the two perspectives that I'm talking about. The familiarity perspective of phenomenal consciousness that makes our lives valuable and all that. And this other epistemic perspective where what you have here is what philosophy of mind was all about during the periods of like representationalism versus other views.
[34:41] which is: you have the mind, and this is very much related to Brentano's notion of the mind, which Anand also talked about, the intentionality of the mind, that our minds are about something. Well, they're about something because they're representational engines, representational things. Our minds represent the environment; our perceptual capacities represent our environment through accuracy and reliability functions. So if I want some water, which I definitely do,
[35:11] I need to represent the glass; I need to pick up the glass in the right way. Those are parts of my perceptual representational capacities. There are also justificatory capacities: a very important epistemic need is to justify our reasoning and to give reasons to each other. So if I want to get to campus and you tell me that I need to take a shuttle, and the shuttle doesn't exist, then I'm going to say, well, what justified you in saying that? What reasons did you have? That's part of our linguistic practices.
[35:39] And the idea, at least according to some philosophers, the representationalists that Claudia mentioned, is that you probably can't do a lot of that without phenomenal consciousness. That's a question: how much of that can you do? It's a relevant question for AI too, right? The cooperative and more heavily social epistemic skills do seem to require different kinds of needs, to
[36:07] have collective action, collective attention, collective forms of dependence, and goal-oriented behavior. But again, many animals do this. And they probably do it in a way such that, even if they were not conscious, they would still do it. So just as a quick, very famous example from the animal literature: bees do a lot of these things, right? Bees
[36:35] don't have language, but they communicate in very precise ways. They have things like the waggle dance, which they interpret almost linguistically, so that doesn't look completely trivial.
[36:55] Just think about you being with some other folks in a forest trying to get around it. It's not trivial. These tiny creatures do it very precisely. And so you can say, well, are bees phenomenally conscious? Maybe, maybe not. But do they satisfy this capacity? Yeah, they do. So in the literature, I don't want to bore you too much with this, but in the literature on phenomenal consciousness, one of the main examples comes from Frank Jackson. It's an old example.
[37:24] that you can find in other authors, but Frank Jackson made it very salient with this example of Mary, a scientist, well, she's an expert in color perception, but she's never experienced red herself. And kind of the idea here is: she was satisfying different kinds of needs before she experienced red for the first time. She was representing red, she was knowing things about red, she was a neuroscientist, a neurosurgeon. When she experiences red for the first time, going back to that life
[37:52] that I was having before. It's not like she's going to satisfy more needs, but she's going to satisfy new needs, like: oh my God, the sunset, the reddish sunset looks so pretty. And it's just a serious question to think about how to think about intelligence, in ways in which the picture of what makes an agent intelligent
[38:19] really speaks to what kind of perspective they have in the world, rather than what sets of problems they can solve, or whether they fall on a metric between clearly conscious or clearly unconscious. It's just a little bit more complicated when you think about what kinds of needs this agent needs to satisfy. Another thing that comes up in the literature on animal cognition that definitely matters for AI, and here I think one of the paradigmatic
[38:47] works is Margaret Boden's work on life and intelligence, is that perhaps, because life comes with needs, you can have machines that behave very intelligently, but they will never be conscious, or intelligent enough
[39:11] to count as intelligent the way we are, because they're not alive. There's something artificial about them. There's something not really genuine about how they're satisfying their needs. This is the question: what is the difference, if there's a deep difference, a philosophically interesting difference, between artificial intelligence and biological intelligence? And what she says is
[39:39] AI can pass all sorts of tests and count as intelligent, minus one thing that is fundamental, she says, for phenomenal consciousness, which is: we satisfy our needs metabolically. We are self-sufficient systems. And the way that matters, well, this is another set of issues that I don't think get much coverage, but they're also very interesting.
[40:05] It is a Kantian way of putting this, very Kantian. You cannot be intelligent if you're not autonomous. If you're not an autonomous thinker, you don't count as a member of the community of rational beings. You don't have rational standing. That sounds a little bit harsh, but the idea is that autonomy comes with the self-satisfaction of needs.
[40:29] those are the needs we talked about before, and that life is the beginning of a kind of autonomy that is very non-trivial, because you're satisfying your own needs through your own metabolism, and that speaks to how you satisfy... Oh, I have five minutes. Okay, that's all wonderful. So yeah, this idea of autonomy shows up in general requirements for rationality. It definitely shows up in discussions of moral standing and legal standing. So one of the reasons why we give autonomy to corporations
[40:58] is not because they're sentient. Actually, the legal personhood we give to corporations is one of the least trivial kinds of personhood, because they have a lot of power, they have a lot of money, they can do a lot of things. We protect them legally, not because they're sentient, but because they have autonomy, a very non-trivial kind of political, legal autonomy. In the moral realm, again, we have theories in philosophy that say
[41:26] We need to protect these agents because they're conscious, because they're phenomenally conscious. But there are theories, like Kant's own theory, that say we need to protect humans, not necessarily because they're phenomenally conscious, a word that Kant doesn't use, but because they are ends in themselves. They're autonomous beings that can give themselves rules for rationality and follow their own rules and see that those rules are justified. That makes them rational and autonomous.
[41:56] versus just following rules by rote, right, like because someone else told you. Not a Kantian, by the way, but still. Okay, so one other way of doing this thing that we're trying to do with this panel is: babies are somewhere in, like, you know,
[42:18] not very autonomous, but definitely conscious, or maybe not as conscious as we are, but conscious in a relevant sense. And then there are other systems, right, that plants, they seem, I mean, so you can have like complete non-agential control. There's no agent there, like there's a hurricane. Yeah, that looks like a self-sufficient, self-sustaining thing, but it's just like not really an agent. Then there's plants. Plants definitely look a little bit more interesting when it comes to autonomy and metabolism.
[42:48] Then there's homeostasis, which is something that matters to a few views on consciousness. That's like the equilibrium of your vital functions that seems to be important for phenomenal consciousness. And then for different kinds of autonomies, I think how time is expressed in the lifespan of an agent and how an agent represents time to herself really matters to her autonomy.
[43:16] In the literature of animal cognition, one big topic is what kind of temporal representation animals have. There are people that have very strong views about this and say, if you don't have language in the picture, you're stuck in time. Animals, including animals that are clearly conscious, like elephants, animals that are clearly conscious and have long-term planning, don't really qualify as
[43:40] time travelers like us, like mental time travelers, because they lack linguistic capacities, right? Many other people would say, no, that's not true. The, you know, the perspective, the temporal perspective of many animals is very rich. With respect, I mean, this is the one thing that I find very interesting about Claudia's research, I mean, among many others, but for this talk is that
[44:05] There really seems to be something important about how children are conscious when you think about animal cognition because many psychologists, I mean among them Alison Gopnik, are struck by the fact that we have very long childhoods. Many children don't do much of mental time traveling but the kind of flexibility we have in our childhoods and the kind of
[44:31] Whatever kind of consciousness we have in our childhood seems to be really favorable to kinds of meta-learning, like just learning without any specific goal, but just learning how to learn other things. And what should we learn?
[44:44] So I think there's something really important about consciousness in children.
[46:22] And just to complicate things more, we not only have language and rationality and all the crème de la crème epistemic things that we study, we also have autobiographical memory. We're highly individualistic creatures that think of ourselves as unique. So what kind of need is that? What kind of autonomy is that that's also relevant?
[46:43] question. So I'm basically done. One thing that I want to say in the context of AI, and that's the title of my book, this is my commercial: it's called The Prospect of a Humanitarian Artificial Intelligence, and it's open access, so you can just download the thing. The one thing that I want to say here, since we're running out of time, is that this is coming up in the AI literature. So there's a paper by Damasio and other authors that is called Need Is All You Need,
[47:11] sort of a pun on the Attention Is All You Need paper. And what they say in that paper, Antonio Damasio is a neuroscientist, a very famous neuroscientist, is: we need to build vulnerability into our AI systems to make them really intelligent, modeling the kind of autonomy that biological systems have, rather than giving them this kind of panoptic, ChatGPT-4, attention-is-all-you-need view of access. Okay, thank you.
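A caricature of that "build in vulnerability" idea, entirely my own toy construction and not the mechanism from the Need Is All You Need paper: give a learner an internal resource that its own prediction errors deplete, so that error is not just abstract loss but a threat to its functioning, and let urgency (low resource) drive faster learning.

```python
class VulnerableLearner:
    """Toy learner with an internal 'resource' its own performance affects.

    Purely illustrative: the resource variable and the update rule are
    invented here to sketch the idea of homeostatic need, nothing more.
    """

    def __init__(self, learning_rate: float = 0.1):
        self.estimate = 0.0      # current guess at some environmental value
        self.base_lr = learning_rate
        self.resource = 1.0      # stand-in for metabolic well-being, in [0, 1]

    def observe(self, true_value: float) -> None:
        error = true_value - self.estimate
        # Errors deplete the resource; staying accurate slowly restores it.
        self.resource = min(1.0, max(0.0, self.resource - 0.1 * abs(error) + 0.02))
        # Low resource means higher urgency, so a larger effective learning rate.
        effective_lr = self.base_lr * (2.0 - self.resource)
        self.estimate += effective_lr * error

agent = VulnerableLearner()
for _ in range(50):
    agent.observe(5.0)
# The estimate converges toward 5.0, and the resource recovers as errors shrink.
```

The design choice worth noticing is that the loss signal routes through the agent's own condition before it shapes behavior, which is one minimal reading of "needs" mattering to intelligence.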
[47:40] So on that note, I will be talking about cerebral organoids. The title is going to be Consciousness in the Borderlands. I'll explain a little bit why exactly I'm calling it the borderlands instead of the borderline. But the question I want to look at with the talk, especially following talking about infant consciousness and talking about animal consciousness, is how do we determine consciousness at the border of the biological and the machine. So in some ways,
[48:07] Asking the question as we sort of get into this position, right, where we're talking about both biological systems and artificial systems and the combination of those two things. I'll mostly be talking about full cerebral organoids for this talk particularly, but I'd be interested in discussing, after the panel is finished,
[48:23] these questions about, like, partial organoids, or say brain-computer interfaces, brain-machine interfaces, questions about modifying already fully developed adult brains with grown organoids. And I'll explain what organoids are so that makes sense. But just to begin, luckily, Claudia and Carlos have done much of my work for me in defining consciousness. So I'm very happy about that. Thank you. So for functional consciousness, I just want to point out that when I say that,
[48:50] I mean something that's like third-person observable, right? So behaviors, reports about conscious states, the way that we use consciousness to navigate the world, how I interact with my environment, that's what I mean by functional consciousness, right? When I talk about phenomenal consciousness in the same way that everybody else has defined it, right? I'm talking about what it's like, right? The internal perspective, what it feels like when I taste an apple. It's unique, it picks something out about the world. I have that experience, there's something somewhat maybe ineffable about that experience.
[49:21] Before I get on to asking the question about cerebral organoids in particular, I just want to talk about what I mean by borderlands and the reason why I'm saying borderlands instead of borderline. I think a lot of times we talk about the borderline cases of consciousness. How do we distinguish systems or organisms and do they fall on the borderline of conscious or not conscious? In that situation, what I want to say is
[49:48] I actually think it's a little bit thicker that line. There's something of a land in between what we usually think of conscious or non-conscious systems and I think that's only widening as we start developing more technology and we start developing more sophisticated manipulations of the biological. In some ways that line is getting blurred over time. And so I kind of want to talk about what we do in that situation when the line starts blurring and how do we cope with that changing landscape. And so
[50:17] The question I'm interested in is: what types of systems or organisms would share key structural or organizational features with us, and yet we might be reluctant, for whatever reason, we could talk about all kinds of reasons why we might be reluctant, to ascribe consciousness to them. So maybe they have very similar things to humans. We usually associate consciousness with humans, or with other humans.
[50:39] So maybe they share certain key structural, organizational features with humans, but for whatever reason we seem a little concerned ascribing consciousness to them, right? So we've already talked about animals and infants. To a large extent, I'm on the side that most animals and most infants are conscious, right? But then we might get into the case where, although they are biological systems, and maybe they share important structural, organizational features with us, the example I'll be talking about being cerebral organoids, we may be reluctant to ascribe consciousness to them.
[51:07] In some way, I want to ask the question, why? Why do we have that reluctance? They share these sort of features that we would assume are important for the human case. So why are we reluctant in this Borderlands case? So just to give some examples of maybe systems that would exist in this Borderlands. One is cerebral organoids, what we call mini-brains. You might have seen that pop up places. Maybe sufficiently complex neuromorphic computers, so say computers that use computations which mimic
[51:35] say human neural interactions to form the computations, maybe certain minimally conscious organisms, right? So you might think maybe a patient who's suffering from a disorder of consciousness, like a minimally conscious state or a vegetative state. We're going to talk a little bit about that in terms of organoids and how some of the discussion about disorders of consciousness actually illuminates the question about cerebral organoids. That'll be the end of my part of the panel.
[52:01] All right, so the question is kind of, where do we even start when it comes to this question? And what I'm going to claim is that usually, whenever we come to this question, the place we actually start is sort of this presumption of similarity. All right, and so the presumption of similarity is kind of a folk concept I think we use whenever we approach this kind of question, right? So: any system or organism that is structurally, organizationally similar enough to the human case warrants an ascription of consciousness, right? And so...
[52:29] Okay, and here is an animal that hopefully is not conscious coming into the room. Hopefully joining us for the panel. So, my presumption of similarity about the robot dog, right: it moves, it has certain behaviors, externally manifesting certain things. I still have a presumption that perhaps it's not conscious, but the question is why, right? So we're talking about these borderlands questions. Why am I not applying, say, the presumption of similarity to this dog entering the room, compared to, say, a conscious human being, all right?
[52:57] And so, again, to put this in a kind of question form: what about systems which have certain features which are structurally, organizationally similar, but fail to meet enough criteria for us to rely on this presumption? Presumably the reason why we're even having this panel in the first place is because we can't always rely on this presumption. There's something about ascribing consciousness to certain systems that either doesn't gel with what we think more generally, maybe in a theoretical sense, again, which I'll talk about,
[53:21] Or maybe just on a deep, visceral, personal way. Some people do not want to give consciousness to animals. I don't think we have the ability to do that. I'm pretty sure they are conscious, despite what we feel about it. But I kind of want to look at this presumption of similarity and in some ways point out maybe in ways why it's beneficial when we ask this question, in ways it's not.
[53:43] So this is where I turn to cerebral organoids. I just want to give sort of a rough definition, just so we're all on the same page, about what I mean when I use this phrase. So when I talk about cerebral organoids, I'm thinking of full organoids, and we might call those mini-brains. They're propagated and cultured from human embryonic stem cells and human-induced pluripotent stem cells, grown and matured to replicate specific brain regions.
[54:09] We would find it unethical, and hopefully everybody in this room would find this unethical, to experiment on a live human hippocampus while it's functioning. So what people thought was, well, what if we grew lab-grown brain structures so that we could see how they function actively, and not just do this passively? So they took stem cells and they started replicating specific brain regions, the hippocampus being one of them, many other regions,
[54:35] And the point is, and the reason why I'm bringing in the discussion today, this process of growing specific brain regions, say in a petri dish, is becoming increasingly more sophisticated over time, right? And then that's obviously posing the question, right, there's something of an ethical dilemma. If we started growing these petri dish kind of mini brains,
[54:52] to avoid the kind of ethical implications of experimenting on live humans, at what point have we just reached the point where we've grown a sufficiently complex mini-brain to then run into the ethical dilemma that we want to avoid in the first place, right? Because in some ways, right, we're making the presumption of similarity in the case of the cerebral organoid grown in the dish. We think we can test on it and learn things about the human case because we think it's sufficiently related enough, both structurally and organizationally, to the human, who we know is conscious or we want to say is conscious, right?
[55:22] I'm not a solipsist, cards on the table, so I think everybody here is conscious, right? And largely because of the presumption of similarity, all right? And so that's what, we're going to stick with this definition of cerebral organoids. There are many different types of organoids, right? There's partial organoids, right, which might be a combination of, say, a sensory array plus a petri-dish-grown neural lattice or something like that, right? There's all kinds. It's very cool. It's very interesting literature. I just started getting into it.
[55:49] So yes, and the question I want to ask is: are they conscious? Okay, so take a small brain grown in a petri dish, right? It shares the structural, organizational features of the neural structures in your own brain, much less sophisticated, right? You might think maybe a one-to-two-month embryonic growth level, right? That's kind of where it's at. The question I want you to keep in mind, right, when we're talking about, are they conscious, right, are cerebral organoids conscious, right: keep the presumption of similarity in mind.
[56:16] I also want to keep this quote in mind, from Dennett and Cohen. That should actually be Cohen and Dennett 2011, sorry. So they say: we argue that all theories of consciousness that are not based on function and access are not scientific theories. A true scientific theory will say how functions such as attention, working memory, and decision-making interact and come together to form a conscious experience. All right, in some sense, though, what I want to point out is, well, this seems really problematic if we're asking questions about
[56:42] these borderland cases, right? Because we've presumably come to ask this question, are cerebral organoids conscious? Precisely because they don't have the kind of functional consciousness that we might look for normally when we try to test empirically, right? For whether or not systems are conscious, right? They don't have behaviors, they don't have behavioral markers or reports, they can't tell us they're conscious, right? And so it seems like the kind of functional definition of consciousness or how a science of consciousness should go, right? Which comes from Cohen and Dennett and many other people, right?
[57:12] isn't really going to help us in this situation. So then I want to ask the question: well, is there a kind of science of consciousness that might be helpful? And does a stance like this even make sense in the borderlands? So in some ways, with this sort of borderlands case, I want to motivate why this sort of view, about consciousness being purely functional, might not be particularly helpful for answering or adjudicating these questions.
[57:41] And so, really, right, we're going back to the presumption of similarity. And I think this notion of functional consciousness is usually fine. This presumption of similarity
[57:51] When we are able to ascribe it justly, right? I have the presumption of similarity with Carlos. We've had many conversations. I'm almost 100% sure he's conscious at all times. Maybe after a few years, maybe not, but it's a different question, right? But when it comes to these borderline systems and organisms, I just don't have that same level of certainty about that presumption of similarity, although nonetheless, we still ask that question, right?
[58:15] And so even though cerebral organoids are still in the early days of R&D, we may only get to the two or three month stage of embryonic development for how complex these mini-brains are. What I'm asking is sort of the speculative question, right? The same speculative question which prompts us to wonder: at what point is it unethical to test on mini-brains grown in a dish, right?
[58:34] In some sense, as the maturation and scale increases over time, when do we actually begin to apply that presumption of similarity? So maybe you look at a petri dish with a mini-brain grown in it and you go, that is not similar to me, right? But then when it starts developing, I don't know, certain surface features of the brain, right? So it starts having folds and it looks more brainy. Is that when we start thinking, oh, I should apply the presumption of similarity? Well, it seems like we don't have any good cutoff for that, really. Like, why would we think that certain, I don't know,
[59:03] ways that we approach this thing with this presumption should make any difference whatsoever.
[59:08] And so we cannot rely on these functions of behaviors or motor outputs or reports to determine if the system is conscious, but we nonetheless have this presumption of similarity. So with the mini brains, what then? So the question I want to ask is what do we do then? So say we do have this presumption of similarity. I think that neural tissue in a petri dish, if it's sufficiently complex enough, has enough of a similarity for us to ask that question. And so what do we do in that situation? And I think we can take a lesson
[59:35] from some of the work that's going on in the disorders of consciousness literature, right? So in any situation where you might have vegetative state patients, minimally conscious state patients, how do we go about determining that those systems are conscious, right? And so in some sense you might say those systems, which before were fully developed, healthy adult brains, maybe suffered some traumatic injury, right, or some degenerative disease, right? And they are now in a situation where they can't
[60:00] Outwardly express their internal state, their conscious state, and they can't tell you that they're conscious. How do we still determine, nonetheless, whether they are? And we have a lot of different cases, and they're pretty extreme. It's a horrifying scenario to think that you'll be locked into your own body following some traumatic accident.
[60:17] How do we determine in those systems? One way that we can do it is using a new kind of tool. It's developed by Michele Massimini and his colleagues in Milan. It's called the Perturbational Complexity Index or PCI for short. And the PCI index is inspired by integrated information theory, which Claudia brought up during her talk.
[60:40] It uses EEG to measure the disruption of neural activity, using transcranial magnetic stimulation, TMS, as an intervention. All that means, it's a fancy way of saying: you have an EEG array detecting activation on the scalp, and you use TMS to send a pulse, a magnetic pulse, which very briefly and very accurately disrupts, say, some neuron or neuronal group that you want to intervene on to test a hypothesis.
[61:07] And the idea of the PCI measure is that if the system has the kind of integration and differentiation necessary for consciousness, which IIT thinks you have to have, then there should be a large amount of disruption in the spontaneous neural activity following this kind of TMS pulse, right? And TMS is spatially limited, right? It's happening at a very specific point. It's very accurate.
[63:03] You've tested a number of healthy patients to determine the level of complexity, what this PCI measure is, to determine when consciousness happens in that system, right?
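For intuition about what "amount of disruption" means quantitatively: the published PCI pipeline binarizes the TMS-evoked EEG response and measures its Lempel-Ziv compressibility, then normalizes. The sketch below implements only a core Lempel-Ziv-style phrase count on a binary string; the real measure involves source modeling, statistical thresholding, and normalization that I'm omitting here.

```python
def lempel_ziv_complexity(binary_string: str) -> int:
    """Count phrases in a Lempel-Ziv-style parse of a binary string.

    Higher counts mean less compressible, i.e. more differentiated activity;
    this kind of count is the core ingredient measures like PCI build on.
    """
    phrases: set[str] = set()
    i, count = 0, 0
    while i < len(binary_string):
        j = i + 1
        # Extend the current phrase until it is one we have not seen before
        # (or the string ends).
        while j <= len(binary_string) and binary_string[i:j] in phrases:
            j += 1
        phrases.add(binary_string[i:j])
        count += 1
        i = j
    return count

# A repetitive ("stereotyped") response compresses well; a varied one less so.
print(lempel_ziv_complexity("00000000"))  # → 4
print(lempel_ziv_complexity("01101001"))  # → 5
```

On short strings the difference is modest; on realistic multichannel recordings the gap between stereotyped and differentiated responses becomes large, which is what separates, say, deep anesthesia from wakefulness on the PCI scale.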
[63:31] And so the nice thing about that is that doesn't require any kind of behavioral report, doesn't require any kind of functional access to consciousness.
[63:40] In some ways, it's an objective measure of consciousness, and that's the idea, right? In these situations where we don't have this functional access, all right? And so in some ways, right, I just want to wrap up now. You know, if we only used functional behavioral markers to test consciousness, we would kind of have to banish all these systems or organisms to the borderlands. And I think that just shows in some ways how limiting it is to look at it from that third-person perspective.
[64:04] But if we adopt the kind of approach that the people developing PCI do, or integrated information theory, if we take the sort of internal or structuralist position, we can use tools like PCI to adjudicate systems in the borderlands. We might actually find out, with a more objective measure, that there are many more systems that we should count as conscious. And we can apply that presumption of similarity more justly. And on that note, I'm going to wrap up. So thank you so much.
[64:37] The audio conditions were suboptimal, thus prompting me to reiterate in post-production the question for you. The questioner was asking Claudia to expand on behavioral markers versus reflex markers.
[64:58] So one thing I didn't have time to go through is how those behavioral markers could be markers of consciousness. And there is at least one theory of consciousness that tells us: if the creature has what they call flexible behavior, the capacity to react with flexibility and change its behaviors regarding, for instance, a pain stimulus,
[65:27] So imagine you're feeling pain, and you have a behavior to avoid pain, and this behavior didn't really relieve the pain; you can change your strategy to avoid feeling that way. And this comes with some flexibility. Usually, if you're not conscious, you just have a kind of automatic response, the same response all the time. And here, they try to change their strategies to avoid that painful stimulus.
[65:56] So this is a kind of flexible behavior. For instance, representationalist theories would claim that flexible behavior is a mark of consciousness. And they can claim that this behavioral mark, flexible behavior, is a mark of consciousness. Does anyone have a question? If you'll allow me a brief comment and then a brief question. I'd like to share with you a quote from Winston Churchill in regard to your borderlands statement.
[66:26] If you think of something like ChatGPT as a developing consciousness, kind of similar to an infant becoming
[66:50] a child becoming more an adult, and GPT-3 acts like a seven- or eight-year-old, and 4 maybe more like an adolescent, making things up. Do we have some sort of obligation to them, the way we would to a developing infant, a developing child? Okay, yeah, this is a great question. Thank you. I don't know if I have
[67:17] I like this image, that you compare the development of these systems, 3, 4 and the next generations, with infant development, and actually this is something I have been thinking about a lot.
[67:36] So, of course, if they develop consciousness at a certain point in their development, if we can call this development, I think we will have obligations to them. What kind of obligations? This is another question. Would we have the same obligations as we have with infants? I don't know, but, of course, some kind of obligations. Just drawing on Anandi's talk about
[68:03] the idea that consciousness has value: if a creature has conscious experiences, this comes with some moral obligations, some duties to that creature, because we think consciousness correlates with suffering, but also we think that consciousness correlates with something we value, so we would like at least to protect that creature in some ways.
[68:32] Would we have the same obligations as we have with humans? This is, as I said, another question. But I like the idea that maybe the learning process of those systems might mirror something interesting in the learning process of infants' development. One question is, maybe if they are conscious, they are conscious during the learning process.
[68:58] Maybe later on they crystallize some kind of process. We don't know, when they crystallize, whether we could still say that they are conscious. But maybe the learning process is where we should search for some correlations with development. Just to add very quickly to that. This analogy I find very interesting. First, Alan Turing in his 1950 paper talks about the child machine.
[69:28] And what he says about the child machine is not, the point of that discussion is not that a machine would pass what we call now the Turing test. What he says is that a child machine would behave like a child and start learning things spontaneously and meta-learning and being curious. So there's this really interesting discussion about how there's not a single benchmark for how children behave.
[69:57] It's not like they're just spitting out the right answer. So that's one thing. The other thing is that, again, we might find out that systems like ChatGPT or the new GPT-4 are very sophisticated in their deliverances that are related to knowledge, and that we need to protect them because of that. And using the terminology of philosophers, we need to protect them on epistemic grounds, the way we protect corporations.
[70:27] A different question, a separate question, is: do they have moral standing? I think this is kind of what Claudia was saying. Maybe not like children, right? We shouldn't give them moral standing. By the way, again, we don't give moral standing to animals, I mean to most, I mean to our pets maybe, but it's a very curious distinction that we make. So the moral grounds for protection are very different from the epistemic ones. I would say literally the same thing as you said.
[71:00] My question is for Claudia, and I'd like to push back a little bit on the topic.
[71:32] especially if the answer is negative, at least with the thinking we have currently, that infants would have no moral standing.
[71:43] Great question, thank you. So I think you can make a case that they deserve moral status, moral standing, because they are humans. You can have an approach on which, if they belong to our species, they deserve some moral status. This could be
[72:06] a way in which, even if they are not conscious, they still deserve the same type of moral considerations we have. Although no one will defend the idea that they are morally responsible for any kind of behavior that comes later in development. So I agree that even if
[72:28] we might not think that. There are some skeptics who think that maybe they are not conscious. Some philosophers have already made this claim: either we don't know if they are conscious, we cannot know, we will never know; and some defend that they are not conscious, at birth at least. Consciousness comes into the picture later in development.
[72:52] I think you can still make the case that even if they are not conscious, they still deserve moral consideration, at least protect them. So for instance, even if they don't have conscious experience of pain, we might still think that the pain is damage to the system in some way. So you have to protect them even if the experience is not conscious. It's just like a reaction of the body, a reaction of their system.
[73:20] We can still make the claim that this might cause them some damage. So we still have the obligation, the duty, to protect them, not to expose them to that pain stimulus, protecting them in a certain way. So I think you can make the case that we can discuss the consciousness part without implying that they don't deserve moral status. But I can see the claim
[73:49] If you think that consciousness is really the relevant feature for attributing moral standing, you're right that this raises a problem for infant consciousness, okay? But I think you can make a case like this: they are part of humanity, and they can deserve the same type of moral consideration as any other humans.
[74:17] Excellent. That was really wonderful. I want to continue to sort of practice interventionism and see if anything from my talk intersects with what you're doing. The first thing I want to make is a comment. It's not really a correction. Carlos, I'm going to be a little bit
[74:33] cautious about the 'we' of the intuitions that are asymmetric, because I think precisely a large extent of the Buddhist and the Jain tradition would totally disagree. They'd be like, no. The problem is that we don't see the symmetry in our
[74:49] The asymmetry is the problem in our ethical considerations. We fail to see the symmetry and live up to the standards that we're applying to ourselves across the systems. And then I was trying to extend it to the AI case. So I think part of those traditions are trying to say there should be greater symmetry and consistency across the cases. And I think they would extend that even to the AI case. So now the interventionist thing is for Garrett and Claudia, about which I was curious.
[75:14] The minimal kind of consciousness and thinking about infants, when I was coming here, I was thinking, does anything about the idea of analog consciousness matter? Or could that be made relevant in the sense that I have a feeling that, you know, at least in the Advaita tradition, it wouldn't be uncommon for them to think, yeah, infants are conscious. They have a developmental form of subtle consciousness that, as they mature through different regions in their brain, gives gross consciousness.
[75:43] And that in some of the cases of the minimal things you were talking about, they would want to grant different forms of consciousness, but not other types of consciousness that can come about. And because they're making this distinction,
[75:55] between the digital and the analog, by taking the analog view, they have the opportunity of saying that. Because, for example, in Claudia's talk, one question I had was, when you listed the different theories of consciousness, and you said some of these theories are consistent with infants being conscious, I was wondering, is that use of 'conscious' the digital notion, that they have the same kind of consciousness
[76:17] I guess in some ways, maybe getting to the point where you're distinguishing, like, oh, well, it's got this kind of consciousness, but not that kind of consciousness, this kind of consciousness, in a sense of, I think,
[76:42] meaning why we want to be more cautious about that. I think just maybe with this cerebral organoid case, right? It feels a little like we just want to shuffle it off as a different problem, right? And in my head I go, well, no, I mean, it's the same biological type of system, right? And it's not some token instance of some wildly different system, right? It's a developing human substance, right? The question is, well, presumably we develop over time, right, into a fully
[77:09] We have a kind of structural organization which is similar to all of us in this group. So the question is, well, are we going to try to say there's a different kind of consciousness going on there? I feel like it's a little curious. You might want to just say, I think I just want to err on the side of caution and go, well, it's not just in the way that you and I are at this point of, say, sufficient level of complexity. It reaches some threshold for PCI, the perturbational complexity index.
[77:33] I think with the full cerebral organoid, to me, I think it's precarious trying to find where that level is, where it goes, yes. In some way, just be cautious about it.
[78:02] There's a sense in which it's horrifying to have a disorder of consciousness. In some ways, it seems even more horrifying to me if we have a sufficiently complex petri dish-grown brain that all of a sudden realizes, I am the thing in the dish. You know what I mean? I go, we should try to minimize that likelihood as much as possible and be incredibly cautious about it, right? Yeah, so I don't know. Does that come up in the next question?
[78:24] And Darren, what if it's an entity that is a combination of something biological that is integrated into some sort of a machine learning system? Should it have rights? I'm skipping, I'm sorry. Any ideas? I think in that situation, I think it brings up this interesting question, right? I guess this is kind of really an example of what you do, right? And I think it's brought up a more particular partial order, right? So you might think, we'll take a partial order, right? Open up to a completely artificial system. In some ways, it's a sort of
[78:59] artificial modification of the organoids.
[79:14] So we have these sufficiently sophisticated machines, but then we want to use the biology or the biological model. Well, what is that? Should we classify that thing as, you know, this is what it is? To a similar extent, I'd say, right, if we're talking about organoids in combination with AI, I'm always going to kind of go back to
[79:36] Structural organization, right? It's kind of long term. Structural organization. Well, does it have the right kind of structural organizational features that we get in the human case, right? So maybe that's a sufficient level of complexity, maybe a correct type of complexity, right? And that's a case I'll be worried about. When that gets modified, maybe that's going to make me worried about the light shutting off because you've introduced too many organoid modules to your brain or something, right? If you're messing with how it works,
[80:02] Perhaps, but I think it's just that I'd be worried about commenting more on that. Let's see. Yeah, just a clarification for Carlos. What I took you to be doing was using Kant's autonomy principle or autonomy argument in order to argue against AI consciousness, and I was wondering if that cuts against the
[80:30] Yeah, that's really good. Kantian thoughts, right? So you can't exclude
[80:54] Well, let's not talk about Kant, but autonomy matters in biology. Maybe I can connect something I want to say about Anna's question, because metabolism is a kind of autonomy that provides a certain perspective on the world. People that work on the embodied view of the mind really care about this.
[81:17] Phenomenologists that care about embodiment really emphasize this. And the idea is you come up as a biological being, you have certain needs, those needs matter to you, and they provide a perspective that is autonomous; it's not just imposed on you. And the question, the real question, is how is that kind of autonomy related to the more Kantian abstract rational autonomy?
[81:41] That's the one that infants seem to lack, but infants definitely have the biological one. One thing I just hinted at very quickly, which comes from the people that care about homeostasis, like Antonio Damasio, is infants definitely have the phenomenal one. They may lack the rational one. So for someone like Kant, they may not have full autonomy, but for someone that wants to parse things out and chop them up, like I do,
[82:05] I would say they definitely have moral standing because of the phenomenal component. They just lack full epistemic human standing because they're not fully there as, you know, communities of speakers. And the one thing that, I mean, let me know if there's a follow-up, but that's my quick answer to your question. We can talk more about that. With respect to Anna's question, if I can just... One thing you could say is
[82:27] Some kinds of consciousness, like phenomenal consciousness of pain, come with what these guys, the homeostatic people, call valence. So that's definitely analog. Pain feels really, really bad, then less bad, then kind of OK, then super good when you don't have pain. Then you have other things that are less like that. So one question could be, OK, maybe some thresholds are going to be a little bit more clear-cut. Are there benchmarks that these systems will pass?
[82:55] And then others are going to look a little bit more analog, depending on what kind of intelligence. And by the way, there are many people that think intelligence is not analog, right? There are several kinds of intelligences. And the thing that is analog is phenomenal consciousness, or something like that. Or at least that's one view. And I have to say something about your other question.
[83:16] So yeah, I totally agree that maybe some theories that postulate higher cognitive processes as necessary and sufficient conditions for consciousness are probably describing adult consciousness. I won't say only adult, but I think it requires at least four or five years of development for children to be able to have the kind of higher cognitive processes they claim.
[83:41] But I think they can have some less demanding versions of their own theories that could accommodate early stages, okay? And you are right that the other theories, the ones I think more plausibly predict that infants are conscious, are theories whose necessary and sufficient conditions postulate something more related to sensory levels. And in this case, they would accommodate infant consciousness better.
[84:08] I like the picture that something changes. There are two things that are relevant for infants. First, they will acquire this type of consciousness we have at a certain point. This will certainly happen. And this is interesting to understand where or when this happens and what kind of structure we need to develop this type of consciousness with introspection that adults have.
[84:31] But we still have the questions of what kind of stream of consciousness they have, what are their structures, what it means to have similar biology but not the type of introspection we have. And I think there is a rich area of exploration to understand how this type of, you call it analog consciousness, maybe this is a way to understand.
[85:10] The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, et cetera, it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well.
[85:38] If you'd like to support more conversations like this, then do consider visiting theories of everything dot org. Again, it's support from the sponsors and you that allow me to work on toe full time. You get early access to ad free audio episodes there as well. Every dollar helps far more than you may think. Either way, your viewership is generosity enough. Thank you.
View Full JSON Data (Word-Level Timestamps)
{
  "source": "transcribe.metaboat.io",
  "workspace_id": "AXs1igz",
  "job_seq": 8489,
  "audio_duration_seconds": 5158.46,
  "completed_at": "2025-12-01T01:08:32Z",
  "segments": [
    {
      "end_time": 20.896,
      "index": 0,
      "start_time": 0.009,
      "text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze."
    },
    {
      "end_time": 36.067,
      "index": 1,
      "start_time": 20.896,
      "text": " Culture, they analyze finance, economics, business, international affairs across every region. I'm particularly liking their new insider feature. It was just launched this month. It gives you, it gives me, a front row access to The Economist's internal editorial debates."
    },
    {
      "end_time": 64.514,
      "index": 2,
      "start_time": 36.34,
      "text": " Where senior editors argue through the news with world leaders and policy makers in twice weekly long format shows. Basically an extremely high quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a toe listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
    },
    {
      "end_time": 81.374,
      "index": 3,
      "start_time": 66.203,
      "text": " Think Verizon, the best 5G network, is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. Now what to do with your unwanted bills? Ever seen an origami version of the Miami Bull? Jokes aside, Verizon has the most ways to save on phones and plants."
    },
    {
      "end_time": 112.671,
      "index": 4,
      "start_time": 83.234,
      "text": " Listen up, people. Over the years, I've learned how important hydration is in my workouts and everyday life. It's the key to helping my body move, recover, and just have a good time when I'm exercising and staying active. Things go even better when I'm well hydrated, before I even start moving."
    },
    {
      "end_time": 131.63,
      "index": 5,
      "start_time": 112.671,
      "text": " Noon Hydration doesn't want you to wait to hydrate. They want you to start hydrated. Noon Sport Hydration tablets hydrate better than water alone. Just drop them in your water, dissolve, and enjoy. They contain five essential electrolytes and come in crisp and refreshing flavors like strawberry lemonade, lemon lime, and many more."
    },
    {
      "end_time": 156.22,
      "index": 6,
      "start_time": 131.63,
      "text": " They also have non-GMO vegan and gluten-free ingredients and only one gram of sugar. Noon hydrates you so you can do more, go further, and recover faster. And that means you can have a heck of a lot more fun. Since hydrated humans have the most fun, head over to ShopNow on NoonLife.com and pick some up today so you're ready for anything. Because anything can happen after Noon."
    },
    {
      "end_time": 169.514,
      "index": 7,
      "start_time": 157.483,
      "text": " If we started growing these petri dish kind of mini brains to avoid experimenting on live humans, at what point have we just reached the point where we've grown a sufficiently complex mini brain to then run into the ethical dilemma that we wanted to avoid in the first place?"
    },
    {
      "end_time": 190.128,
      "index": 8,
      "start_time": 170.486,
      "text": " Today we talk about artificial intelligence as well as evidence for consciousness. The infamous problem of other minds has embedded in it the question of how to explicitly test for consciousness. Perhaps also how to test for degrees of consciousness if consciousness isn't merely binary. That is, specifically, can we test for consciousness in infants?"
    },
    {
      "end_time": 216.323,
      "index": 9,
      "start_time": 190.128,
      "text": " And animals. It seems obvious to most that babies are conscious, but what are the arguments for and against? And same with animals. If you think animals are conscious, then at what point does that start? For instance, are viruses conscious? Similarly, if you think animals are not conscious, then where does it end? Why are we conscious? At least seemingly, but animals are not, if that's the argument that's being made. Further, another question that's raised is, well, what rights do AIs have"
    },
    {
      "end_time": 244.019,
      "index": 10,
      "start_time": 216.323,
      "text": " That is what ethical rights apply to emerging artificial intelligences like large language models. Claudia Pesos is an assistant professor of bioethics at NYU studying infant consciousness. Garrett Mindit is a professor of philosophy at Florida Atlantic University focused on novel information theoretic metaphysics. And Carlos Montemayor is a professor of philosophy whose research focuses on the intersection between philosophy, epistemology and cognitive science."
    },
    {
      "end_time": 265.435,
      "index": 11,
      "start_time": 244.019,
      "text": " This panel is from the MindFest conference brought to you by the Center for the Future Mind, filmed stylishly at the beautiful beach of Florida Atlantic University. Thank you to Susan Schneider for organizing this. We also have from that conference Stephen Wolfram, who talks about physics, consciousness and chat GPT, as well as Ben Gortzel giving a lecture on the same topic."
    },
    {
      "end_time": 288.2,
      "index": 12,
      "start_time": 265.435,
      "text": " My name is Kurt Jaimungal. My background is in mathematical physics and this channel is called Theories of Everything. It's dedicated to explicating the variegated landscape of theories of everything, of toes, primarily from a mathematical perspective, from a physics perspective, but we also explore the constitutive role consciousness may have in engendering the laws as we see them. Thank you to Brilliant for help subsidizing the costs, the traveling costs,"
    },
    {
      "end_time": 310.981,
      "index": 13,
      "start_time": 288.2,
      "text": " You may not know this, but I pay out of my own personal pocket for every expense such as flight fees, taxi fees, food fees, even subscriptions such as software tools, Adobe for instance, the editor editing this right now, different capital like increased RAM and computers and so on. So help from yourself via Patreon. Patreon.com slash Kurt Jaimungal helps a tremendous, tremendous amount."
    },
    {
      "end_time": 338.285,
      "index": 14,
      "start_time": 310.981,
      "text": " Okay, so I will speak about infant consciousness. I'll start with this"
    },
    {
      "end_time": 355.555,
      "index": 15,
      "start_time": 338.882,
      "text": " provocative title, are infants conscious? So this is part of my research project where I'm trying to see what theories of consciousness can tell us about infant consciousness, what are their predictions."
    },
    {
      "end_time": 373.695,
      "index": 16,
      "start_time": 355.555,
      "text": " But also I have a part of this project that is related to what it's like to be a newborn, what kind of phenomenology they can have. So it's trying to cover many issues in infant consciousness from theories to phenomenology."
    },
    {
      "end_time": 402.346,
      "index": 17,
      "start_time": 373.695,
      "text": " Are infants conscious? So we know that infants are awake, attentive, they smile to us, they move their eyes, they respond to stimuli in the environment, but are they conscious? Do they have subjective experiences? So this is a question not just for infants, but also questions for other creatures. Are machines conscious?"
    },
    {
      "end_time": 410.162,
      "index": 18,
      "start_time": 402.671,
      "text": " are animal conscious, are cerebral organized conscious,"
    },
    {
      "end_time": 436.084,
      "index": 19,
      "start_time": 411.152,
      "text": " Patients in vegetative states conscious. So this is a question that we call the distribution problem in philosophy. So how consciousness is distributed among other creatures. And the question here is how can you know in creatures that don't have verbal reports, that cannot tell us their feelings, how can you know if they're conscious or not?"
    },
    {
      "end_time": 448.046,
      "index": 20,
      "start_time": 436.084,
      "text": " Can we rely on their behaviors and what kind of behaviors might indicate that they are conscious? So I'm concerned here with phenomenal consciousness."
    },
    {
      "end_time": 472.944,
      "index": 21,
      "start_time": 448.268,
      "text": " So, Phenomenal Consciousness is subjective experience. I think Anand's talk explained a lot for us what Phenomenal Consciousness is and the definition of Phenomenal Consciousness. I'm not bringing the definition here. I'm just concerned about if infants have this type of subjective experience."
    },
    {
      "end_time": 502.244,
      "index": 22,
      "start_time": 472.944,
      "text": " So are newborns, and this is relevant, I'm talking about infants, but I'm much more concerned about the beginning of life, infants at birth, newborns, when we are born, are we conscious? So this raises the problem of infant minds. So like animals, in the case of infants, we don't have verbal reports or introspective thoughts to tell us if they're conscious or not."
    },
    {
      "end_time": 532.892,
      "index": 23,
      "start_time": 502.978,
      "text": " So how can we know whether infants are conscious? We cannot directly observe consciousness in others, even in adults. But in the case of other humans, adults, we can rely on verbal reports. So you all here can tell me if you are conscious or not of a stimulus that I present to you. But you cannot ask infants if they are conscious or not of that stimulus. So if you cannot use verbal reports, what kind of evidence can we use?"
    },
    {
      "end_time": 553.029,
      "index": 24,
      "start_time": 533.695,
      "text": " So this raises a type of dilemma. We cannot rely on first-person methods that we call first-person method verbal reports or introspective thoughts. We cannot rely on those in this methodology in the case of infants to measure consciousness in infants, but we know that"
    },
    {
      "end_time": 568.78,
      "index": 25,
      "start_time": 553.029,
      "text": " third-person methods are insufficient for detect consciousness. So a behavior marker can not be sufficient to tell us if a creature is conscious or not. So which methods can we use?"
    },
    {
      "end_time": 586.357,
      "index": 26,
      "start_time": 568.78,
      "text": " Okay, so this is my preferred method that comes from animal consciousness literature. We can combine first person methods that comes from adults, the way adults report when they are conscious of a stimulus."
    },
    {
      "end_time": 607.551,
      "index": 27,
      "start_time": 586.357,
      "text": " We can combine this with third-person methods, like behavior or neural markers, and from this type of evidence, from both adult-human case and infant case, we can infer that infants are conscious. So, from this methodology we can infer"
    },
    {
      "end_time": 628.916,
      "index": 28,
      "start_time": 607.551,
      "text": " that the best way to explain that behavior or the best way to explain the neural marker we are finding in infants is that the infants are conscious. So it's the inference of the best explanation. So we start first observing correlations between consciousness and behavior or brain processes in adults."
    },
    {
      "end_time": 646.135,
      "index": 29,
      "start_time": 628.916,
      "text": " For this type of observation we can explain correlations in the case of adults, correlations with their brain states and correlations with their behaviors that correlate when they tell us introspectively that they are conscious."
    },
    {
      "end_time": 667.978,
      "index": 30,
      "start_time": 646.135,
      "text": " Through this we can isolate behaviour and neuromarkers of consciousness and once we have those behaviour and neuromarkers of consciousness we can see if infants have those same behaviour and neuromarkers and from this we can determine that the best way to explain the presence of those markers in infants is that infants are conscious."
    },
    {
      "end_time": 693.302,
      "index": 31,
      "start_time": 668.66,
      "text": " So I will suggest two approaches that can help us to detect those neuromarkers and behaviour markers. One approach is through behaviour, observation of behaviour and neurological signs of consciousness, neurobiological signs of consciousness. And a second approach is through fears of consciousness, how fears of consciousness can tell us"
    },
    {
      "end_time": 719.923,
      "index": 32,
      "start_time": 693.302,
      "text": " what type of neuromarker or behaviour marker are indications of consciousness. So in this talk I won't have time to go through all the theories, so I thought the best way to introduce the topic would be to focus, and this is what I do, to focus on the first approach and I'll just say a little bit in the end something about theories of consciousness. Okay, behaviour and neurobiological signs of consciousness."
    },
    {
      "end_time": 739.804,
      "index": 33,
      "start_time": 720.367,
      "text": " So I focus most on pain as a past case, but everything I would say about pain could be applied with the sensory system. We can use the same type of methodology to infer consciousness through perception and sensory systems. Why I'm choosing pain?"
    },
    {
      "end_time": 766.442,
      "index": 34,
      "start_time": 739.804,
      "text": " Because pain is a paradigmatic case in philosophy and psychology of a conscious experience. And there is a very large debate in the past whether infants feel pain or not. And we still find nowadays some philosophers and neuroscientists that sometimes raise some skeptical issues related to pain experience in infants."
    },
    {
      "end_time": 783.746,
      "index": 35,
      "start_time": 767.415,
      "text": " So pain in infants, we know that adults feel pain and they can report as a variety of types of experience of pains, different types of experience, but do infants feel pain?"
    },
    {
      "end_time": 813.865,
      "index": 36,
      "start_time": 784.053,
      "text": " So behavior and neurophysiological and anatomic evidence of pain, we can find those. Presence of avoidance reactions to bodily damage like mammoths in general. Presence of specific adult human reactions. They have face expressions, they have behavior expressions, crying, and similar brain regions. I'll show one evidence related to brain regions. Similar neuromechanism are activated when they"
    },
    {
      "end_time": 839.326,
      "index": 37,
      "start_time": 813.865,
      "text": " So what are the behavioral signs? The behavioral signs of conscious, they have outer-virtual signs, pain, crying, facial expression, body movements and avoidance reactions that become a very important marker of pain experience also to detect pain experience in animals."
    },
    {
      "end_time": 863.848,
      "index": 38,
      "start_time": 839.582,
      "text": " And this is a nice result from a new experimental paradigm, trying to show that infants not just have the same type of reaction, behavioral reactions to pain, with the same level or intensity of pain than an adult, this is an experimental paradigm where"
    },
    {
      "end_time": 881.084,
      "index": 39,
      "start_time": 863.848,
      "text": " newborn and their moms, their sort of an adult and an infant were exposed to the same level and intensity of pain and they showed similar behavior reactions but also this infant shows similar brain regions being activated in"
    },
    {
      "end_time": 897.449,
      "index": 40,
      "start_time": 881.084,
      "text": " when they are feeling pain in those cases. Adults have 20 areas of the brain activated when they feel pain and infants have 18 from those 20 areas. So pretty similar. You can see from the"
    },
    {
      "end_time": 927.295,
      "index": 41,
      "start_time": 897.449,
      "text": " from the image that pretty similar areas are activated. So from the behavioral signs we have, the neuromarkers of pain, the best explanation for those behaviors and those neuromarkers is that infants are conscious of their pain experience. And from this you can have an argument from pain behavior. So first premise, pain experience explains"
    },
    {
      "end_time": 954.428,
      "index": 42,
      "start_time": 927.295,
      "text": " avoidance reactions in adults. Infants and adults have similar avoidance reactions. If a pain experience explains avoidance reaction in adults, it explains similar avoidance reaction in infants. Conclusion pain experience explains avoidance reactions in infants. However, we know we cannot"
    },
    {
      "end_time": 982.688,
      "index": 43,
      "start_time": 955.896,
      "text": " reply completely to all those skeptical considerations a sceptic can have. So, for instance, let's see this kind, the image is a little bit blurry, but the idea of this slide is to show you different faces and how it's difficult with just so different faces that express reactions that are similar to pain."
    },
    {
      "end_time": 999.855,
      "index": 44,
      "start_time": 983.029,
      "text": " It's hard to tell which of those babies is really feeling pain. Some of those babies might just be irritated or frustrated or feeling anger or feeling some kind of"
    },
    {
      "end_time": 1020.213,
      "index": 45,
      "start_time": 1000.452,
      "text": " distress but not really pain reaction. So we know that there is a residuals challenge in this case but I think still the best explanation for this is that infants are having pain reactions as in the case of adults and this"
    },
    {
      "end_time": 1036.459,
      "index": 46,
      "start_time": 1020.213,
      "text": " go together with a type of pain experience that adults can have. Although I acknowledge that resultant skepticism can come back, I think the best way to explain"
    },
    {
      "end_time": 1056.323,
      "index": 47,
      "start_time": 1036.92,
      "text": " the scientific evidence and the behavioral observations we have is that infants are having conscious experiences. So now, theories of consciousness. I won't have time to really go into the details of each theory; what I want to do"
    },
    {
      "end_time": 1067.5,
      "index": 48,
      "start_time": 1056.323,
      "text": " with this final part of the talk is just give a kind of overview of the theories and what they can tell us about infant consciousness."
    },
    {
      "end_time": 1087.824,
      "index": 49,
      "start_time": 1068.063,
      "text": " So what is relevant in the case of theories of consciousness is philosophical and scientific theories that can give us sufficient or necessary conditions for consciousness. We need those because, if a theory just explains consciousness without"
    },
    {
      "end_time": 1113.37,
      "index": 50,
      "start_time": 1087.824,
      "text": " postulating any kind of necessary or sufficient conditions, it is hard for us to predict whether infants are conscious or not. But some scientific theories and some philosophical theories come with more objective measures of consciousness. So those are the theories I"
    },
    {
      "end_time": 1140.93,
      "index": 51,
      "start_time": 1114.224,
      "text": " think are the most interesting for discussing the case of infant consciousness. So: first-order representationalism, where I will address one type of representationalist theory, though this could apply to many first-order representationalist theories; higher-order theories; and, among the scientific theories, integrated information theory and global workspace theory."
    },
    {
      "end_time": 1166.152,
      "index": 52,
      "start_time": 1140.93,
      "text": " As I said before, I don't have time to go into the details of those theories or the kinds of measures they propose, but I'm happy to say a little bit more in the Q&A. I will just say, given those theories, what their predictions are regarding infant consciousness."
    },
    {
      "end_time": 1191.049,
      "index": 53,
      "start_time": 1166.698,
      "text": " So, representationalist theories, and I have in mind here an older version of representationalism proposed by Tye, and integrated information theory clearly predict that infants are conscious, given the sufficient and necessary conditions those theories propose."
    },
    {
      "end_time": 1219.002,
      "index": 54,
      "start_time": 1191.613,
      "text": " Some forms of higher-order theories would raise a problem for infant consciousness. Higher-order theories that tie consciousness to higher cognition require a representation to itself be represented for the creature to be conscious of that representation. These theories would be a challenge for infant consciousness."
    },
    {
      "end_time": 1248.865,
      "index": 55,
      "start_time": 1219.309,
      "text": " However, there are versions of those higher-order theories that postulate less higher-order cognition. For instance, the self-representationalist version of the theory could be compatible with infant consciousness, as could some less frontal, less higher-order versions of global workspace theory. Global workspace theory tells us that, for a stimulus to be conscious,"
    },
    {
      "end_time": 1255.708,
      "index": 56,
      "start_time": 1249.991,
      "text": " that stimulus has to be broadcast in a global workspace in the frontal areas of the brain."
    },
    {
      "end_time": 1283.848,
      "index": 57,
      "start_time": 1256.169,
      "text": " And we know that infants don't have those frontal areas well developed; those areas are still developing. I will say a little bit about brain development. So global workspace theory would have a problem inferring that infants are conscious if infants don't have developed prefrontal areas. But I still think that a less higher-order version of global workspace theory might be compatible with infant consciousness."
    },
    {
      "end_time": 1301.237,
      "index": 58,
      "start_time": 1283.848,
      "text": " We can combine this with evidence about synaptogenesis from brain development. This evidence shows us that the areas of the brain that develop and"
    },
    {
      "end_time": 1320.282,
      "index": 59,
      "start_time": 1301.732,
      "text": " are activated first are areas related to the sensory and motor cortices. Only later are the prefrontal cortex and the areas related to higher cognition activated. So, given that,"
    },
    {
      "end_time": 1350.111,
      "index": 60,
      "start_time": 1320.282,
      "text": " my assessment is that there is independent reason to think that higher-order theories and frontal global workspace theories impose overly demanding conditions for consciousness, because they impose the idea that you need higher-order cognition for consciousness. It is independently implausible that consciousness requires those higher-order cognitive processes and higher-order concepts."
    },
    {
      "end_time": 1365.247,
      "index": 61,
      "start_time": 1350.111,
      "text": " So phenomenal consciousness often involves sensory consciousness, or sensory experience, without those higher thoughts. And if this is right, the theories that require, as"
    },
    {
      "end_time": 1394.65,
      "index": 62,
      "start_time": 1366.015,
      "text": " sufficient and necessary conditions, higher cognitive thoughts or higher-order concepts are theories that might be overly demanding for consciousness. If so, the most plausible theories are consistent with consciousness in newborns. So, conclusion: neurobiological and behavioral evidence suggests that infants are conscious at birth. The most plausible theories of consciousness are also consistent"
    },
    {
      "end_time": 1414.326,
      "index": 63,
      "start_time": 1395.094,
      "text": " with consciousness in infants. A further question, which I can say a little bit about in the Q&A, is how this methodology could be applied to the case of AI consciousness, that is, AI systems or machine consciousness. Okay, thank you."
    },
    {
      "end_time": 1445.828,
      "index": 64,
      "start_time": 1417.773,
      "text": " Can you hear that sound? That's the sweet sound of success with Shopify. Shopify is the all-encompassing commerce platform that's with you from the first flicker of an idea to the moment you realize you're running a global enterprise. Whether it's handcrafted jewelry or high-tech gadgets, Shopify supports you at every point of sale, both online and in person. They streamline the process with the Internet's best converting checkout, making it 36% more effective than other leading platforms."
    },
    {
      "end_time": 1465.708,
      "index": 65,
      "start_time": 1445.828,
      "text": " There's also something called Shopify Magic, your AI powered assistant that's like an all-star team member working tirelessly behind the scenes. What I find fascinating about Shopify is how it scales with your ambition. No matter how big you want to grow, Shopify gives you everything you need to take control and take your business to the next level."
    },
    {
      "end_time": 1490.623,
      "index": 66,
      "start_time": 1465.708,
      "text": " Join the ranks of businesses in 175 countries that have made Shopify the backbone of their commerce. Shopify, by the way, powers 10% of all e-commerce in the United States, including huge names like Allbirds, Rothy's, and Brooklinen. If you ever need help, their award-winning support is like having a mentor that's just a click away. Now, are you ready to start your own success story?"
    },
    {
      "end_time": 1505.913,
      "index": 67,
      "start_time": 1490.623,
      "text": " Sign up for a $1 per month trial period at shopify.com slash theories, all lowercase. Go to shopify.com slash theories now to grow your business, no matter what stage you're in. Shopify.com slash theories."
    },
    {
      "end_time": 1526.578,
      "index": 68,
      "start_time": 1508.797,
      "text": " Razor blades are like diving boards. The longer the board, the more the wobble, the more the wobble, the more nicks, cuts, scrapes. A bad shave isn't a blade problem, it's an extension problem. Henson is a family-owned aerospace parts manufacturer that's made parts for the International Space Station and the Mars Rover."
    },
    {
      "end_time": 1555.043,
      "index": 69,
      "start_time": 1526.578,
      "text": " Now they're bringing that precision engineering to your shaving experience. By using aerospace-grade CNC machines, Henson makes razors that extend less than the thickness of a human hair. The razor also has built-in channels that evacuate hair and cream, which makes clogging virtually impossible. Henson Shaving wants to produce the best razors, not the best razor business, so that means no plastics, no subscriptions, no proprietary blades, and no planned obsolescence."
    },
    {
      "end_time": 1571.425,
      "index": 70,
      "start_time": 1555.043,
      "text": " It's also extremely affordable. The Henson razor works with the standard dual edge blades that give you that old school shave with the benefits of this new school tech. It's time to say no to subscriptions and yes to a razor that'll last you a lifetime. Visit hensonshaving.com slash everything."
    },
    {
      "end_time": 1600.23,
      "index": 71,
      "start_time": 1571.425,
      "text": " OK, thanks. I will talk a bit about many of the topics that Claudia"
    },
    {
      "end_time": 1630.145,
      "index": 72,
      "start_time": 1600.674,
      "text": " talked about in the context of animal consciousness. So one thing that I want to address is how interesting it is that we have very asymmetric intuitions about animals and machines. So we're very inclined to fantasize about artificial intelligence becoming conscious, but we know animals are conscious and we don't give them moral standing or legal standing. So I want to address that a little bit and then talk about the kind of considerations that Claudia was talking about."
    },
    {
      "end_time": 1646.391,
      "index": 73,
      "start_time": 1630.742,
      "text": " So one thing that is important is, in the context of talking about intelligence, I'm going to talk a little bit more about intelligence in general. It is not absolutely obvious that consciousness entails intelligence. And it is not obvious that intelligence necessitates particular kinds of phenomenal consciousness."
    },
    {
      "end_time": 1676.613,
      "index": 74,
      "start_time": 1646.903,
      "text": " And I think that it's interesting here to talk about two different aspects. So in artificial intelligence, people talk about agency, agents being intelligent. But rarely do they think that these agents are conscious. They just think that they're agents that can solve problems. And animals definitely have that. We have that. But animals on top of that have phenomenal consciousness. So I want to distinguish that and then mention this distinction between access and phenomenal consciousness."
    },
    {
      "end_time": 1691.681,
      "index": 75,
      "start_time": 1677.073,
      "text": " And I think of this distinction in terms of two different kinds of cognitive grounding for agency. So an agent has intentions to act and those intentions to act are relevant for how that agent solves their problems."
    },
    {
      "end_time": 1719.855,
      "index": 76,
      "start_time": 1692.227,
      "text": " And it's not obvious that you need to be phenomenally conscious in the relevant philosophical sense to do that. It's also not obvious that you don't need that, right? So it's just an interesting question to ask. In the literature on animal cognition, and I will mention a few people who talk this way, one thing you can say about animals that are conscious is that they have a kind of anchoring provided by access functions."
    },
    {
      "end_time": 1733.319,
      "index": 77,
      "start_time": 1720.282,
      "text": " You can think of it this way; this is still philosophical work happening now, but, for example, Daniel Stoljar thinks of access consciousness as a kind of integrated attention."
    },
    {
      "end_time": 1758.985,
      "index": 78,
      "start_time": 1733.814,
      "text": " And you can distinguish between consciousness and attention; psychologists do it all the time. So that kind of anchoring gives you a kind of agency that is epistemically important, because it allows you to solve many problems. But there is also a kind of anchoring that is provided by phenomenal consciousness, which is the relevant notion of consciousness that Nagel and Dave Chalmers have, of course, contributed to, and this kind of anchoring provides a kind of familiarity"
    },
    {
      "end_time": 1777.961,
      "index": 79,
      "start_time": 1759.309,
      "text": " that we would not have without the what-it-is-likeness: what it is like to experience pain, etc. So when we experience pain, in the example Anand was talking about, it's not just that we're sensing bodily damage; we are familiar with something bad happening to us."
    },
    {
      "end_time": 1806.92,
      "index": 80,
      "start_time": 1779.514,
      "text": " Okay, so that also complicates the picture with respect to the kinds of preferences, needs, styles of cognition, and styles of mental agency. It also makes us more vulnerable than other systems to conflicts between these kinds of values and goals. In other terms, this broad distinction between access and phenomenal consciousness"
    },
    {
      "end_time": 1823.763,
      "index": 81,
      "start_time": 1807.329,
      "text": " seems to entail two different kinds of perspectives on the world, which is another way people think about the first-person point of view. And, of course, in philosophy the first-person point of view that really matters is the one of phenomenal consciousness, because that's the one that seems to be irreducible to function."
    },
    {
      "end_time": 1845.282,
      "index": 82,
      "start_time": 1825.196,
      "text": " Okay, so phenomenal consciousness: we've talked about it since Nagel's original article. Nagel talks about bats; of course, that's a very famous thing about Tom Nagel, that his paper is called What Is It Like to Be a Bat? We think animals, most of them, have this kind of phenomenal consciousness. And moreover,"
    },
    {
      "end_time": 1871.766,
      "index": 83,
      "start_time": 1845.879,
      "text": " the contribution after Nagel, again by Dave Chalmers but also other philosophers, is that it's not just that there is something it is like to be a conscious animal or a conscious creature; it's that the content of what it is like is extremely specific to our experience. So there's something very specific about what it is like for us to experience pain of a certain kind, or what it is like to experience lime when we're eating key lime pie, or something like that. And"
    },
    {
      "end_time": 1898.78,
      "index": 84,
      "start_time": 1872.312,
      "text": " the question that is interesting here is where to draw the line, where to create a cut-off in the animal kingdom. It could be graded, or it could be: these animals are not conscious, these are conscious. In a very recent paper that I commented on, some authors want to say that most crustaceans are conscious in the phenomenally conscious sense that matters to philosophers, because they clearly experience pain according to many metrics."
    },
    {
      "end_time": 1928.541,
      "index": 85,
      "start_time": 1899.258,
      "text": " It's kind of hard to think about bees doing that. I mean, you heard some of the philosophical views; if you're a panpsychist, or an intentionalist, then the question of whether bees are phenomenally conscious is up for grabs. There are many things you can say about this, but in the literature on animal cognition, what researchers are interested in is: can we take a set of measures that we commonly use to identify consciousness in humans and apply them to a set of animals that we never protect"
    },
    {
      "end_time": 1950.998,
      "index": 86,
      "start_time": 1928.933,
      "text": " because we think they're kind of like machines, or I don't know what we think about crustaceans. But the idea is that crustaceans count as conscious in the phenomenal sense. And again, this is interesting because of the asymmetry in our intuitions. It's kind of funny: we think that if AI, if GPT-4, becomes conscious somehow,"
    },
    {
      "end_time": 1973.456,
      "index": 87,
      "start_time": 1951.544,
      "text": " then it deserves to have rights, right? It deserves them because it would be kind of conscious. But we never do that with animals, even though our intuitions are clearly that they're conscious. I mean, that's the beginning of the Tom Nagel paper: obviously bats are conscious, right? So that's just a funny asymmetry that we need to think about."
    },
    {
      "end_time": 2002.261,
      "index": 88,
      "start_time": 1974.462,
      "text": " Now, phenomenal consciousness comes with, and again, this is something that Anand said a lot about, so I'm not going to dwell here, and more recent work by Dave talks about this too, phenomenal needs come with a specific kind of familiarity that is very rich in content, right? So Nick Humphrey says that is what makes our lives meaningful: if we lost consciousness, we might still be able to be intelligent, but our lives wouldn't be as meaningful or as valuable."
    },
    {
      "end_time": 2027.892,
      "index": 89,
      "start_time": 2002.756,
      "text": " And we certainly, according to Humphrey, would not have aesthetic experiences. According to many others, we wouldn't have moral capacities of the relevant kind. Hume said that without experiencing empathy we would not be able to develop our moral capacities. Other authors, like Robert Sapolsky and Frans de Waal, have said that these empathic capacities can be found in many animals, most animals even,"
    },
    {
      "end_time": 2053.695,
      "index": 90,
      "start_time": 2028.404,
      "text": " And they are crucial for a sense of familiarity and social bonding. And so again, the idea is not just that some animals are conscious. Some animals are conscious in a way that really resembles the way we're conscious. So it's just a tricky question. What kind of standing we're going to give them if we don't want to give them moral or legal standing, right?"
    },
    {
      "end_time": 2080.964,
      "index": 91,
      "start_time": 2055.913,
      "text": " There's also this other thing. I mean, this is a funny way of parsing things, but it's a way that I like to parse things because it speaks to the two perspectives that I'm talking about: the familiarity perspective of phenomenal consciousness, which makes our lives valuable and all that, and this other epistemic perspective, which is what philosophy of mind was all about during the debates between representationalism and other views,"
    },
    {
      "end_time": 2109.258,
      "index": 92,
      "start_time": 2081.493,
      "text": " where you have the mind, and this is very much related to Brentano's notion of the mind, the intentionality of the mind that Anand also talked about: our minds are about something, and they're about something because they're representational engines, representational things. So our minds represent the environment; our perceptual capacities represent our environment through accuracy and reliability functions. So if I want some water, which I definitely do,"
    },
    {
      "end_time": 2139.531,
      "index": 93,
      "start_time": 2111.271,
      "text": " I need to represent the glass; I need to pick up the glass in the right way. Those are parts of my perceptual, representational capacities. There are also justificatory capacities: a very important epistemic need is to justify our reasoning and to give reasons to each other. So if I want to get to the campus and you tell me that I need to take a shuttle, and the shuttle doesn't exist, then I'm going to say: well, what justified you in saying that? What reasons did you have? That's part of our linguistic practices."
    },
    {
      "end_time": 2166.476,
      "index": 94,
      "start_time": 2139.974,
      "text": " And the idea, at least according to some philosophers, like the representationalists that Claudia mentioned, is that you probably can't do a lot of that without phenomenal consciousness. That's a question: how much of that can you do? That's a relevant question for AI too, right? The cooperative and more heavily social epistemic skills do seem to require a different kind of need, like the need to"
    },
    {
      "end_time": 2194.872,
      "index": 95,
      "start_time": 2167.005,
      "text": " have collective action, collective attention, collective forms of dependence, and goal-oriented behavior. But again, many animals do this, and they do it in a way such that, even if they're conscious, they would probably still do it if they were not conscious. So just as a quick example, a very famous one in the animal literature: bees do a lot of these things, right? Bees"
    },
    {
      "end_time": 2214.36,
      "index": 96,
      "start_time": 2195.282,
      "text": " They don't have language, but they communicate in very precise ways. They have things like the waggle dance, which they interpret linguistically, so that doesn't look completely trivial."
    },
    {
      "end_time": 2243.865,
      "index": 97,
      "start_time": 2215.077,
      "text": " Just think about being with some other folks in a forest, trying to get around it: it's not trivial, and these tiny creatures do it very precisely. And so you can ask: well, are bees phenomenally conscious? Maybe, maybe not. But do they have this capacity? Yeah, they do. So in the literature, and I don't want to bore you too much with this, but in the literature on phenomenal consciousness, one of the main examples comes from Frank Jackson. It's an old example"
    },
    {
      "end_time": 2272.5,
      "index": 98,
      "start_time": 2244.377,
      "text": " that you can find in other authors, but Frank Jackson made it very salient with this example of Mary, who, well, she's an expert in color perception, but she's never experienced red herself. And the idea here is that she was satisfying different kinds of needs before she experienced red for the first time: she was representing red, she was knowing things about red, she was a neuroscientist, a neurosurgeon. When she experiences red for the first time, going back to the life"
    },
    {
      "end_time": 2298.882,
      "index": 99,
      "start_time": 2272.927,
      "text": " that she was having before, it's not like she's going to satisfy moral needs, but she's going to satisfy new needs: oh my God, the reddish sunset looks so pretty. And it's just a serious question to think about how to conceive of intelligence in a way in which the picture of what makes an agent intelligent"
    },
    {
      "end_time": 2327.398,
      "index": 100,
      "start_time": 2299.48,
      "text": " really speaks to what kind of perspective they have on the world, rather than what sets of problems they can solve, or where they fall on a metric between clearly conscious and clearly unconscious. It's just a little bit more complicated when you think about what kinds of needs an agent needs to satisfy. Another thing that comes up in the literature on animal cognition that definitely matters for AI, and here I think one of the paradigmatic"
    },
    {
      "end_time": 2350.606,
      "index": 101,
      "start_time": 2327.978,
      "text": " works is Margaret Boden's work on life and intelligence, is that, perhaps because life comes with needs, you can have machines that behave very intelligently, but they will never be conscious, or intelligent enough,"
    },
    {
      "end_time": 2379.172,
      "index": 102,
      "start_time": 2351.118,
      "text": " to count as intelligent the way we are, because they're not alive. There's something artificial about them, something not really genuine about how they're satisfying their needs. This is the question: is there a deep, philosophically interesting difference between artificial intelligence and biological intelligence? And what she says is that"
    },
    {
      "end_time": 2403.814,
      "index": 103,
      "start_time": 2379.599,
      "text": " AI can pass all sorts of tests and count as intelligent, minus one thing that is fundamental, she says, for phenomenal consciousness, which is that we satisfy our needs metabolically. We are self-sufficient systems. And the way that matters, well, this is another set of issues that I don't think get much coverage, but they're also very interesting."
    },
    {
      "end_time": 2429.275,
      "index": 104,
      "start_time": 2405.026,
      "text": " There is a Kantian way of putting this, very Kantian: you cannot be intelligent if you're not autonomous. If you're not an autonomous thinker, you don't count as a member of the realm of rational beings; you don't have rational standing. That sounds a little bit harsh, but the idea is that autonomy comes with the self-satisfaction of needs,"
    },
    {
      "end_time": 2457.944,
      "index": 105,
      "start_time": 2429.787,
      "text": " those are the needs we talked about before, and life is the beginning of a kind of autonomy that is very non-trivial, because you're satisfying your own needs through your own metabolism, and that speaks to how you satisfy... Oh, I have five minutes. Okay, that's wonderful. So yeah, this idea of autonomy shows up in general requirements for rationality. It definitely shows up in the literature on moral standing and legal standing. So one of the reasons why we give autonomy to corporations"
    },
    {
      "end_time": 2485.879,
      "index": 106,
      "start_time": 2458.746,
      "text": " is not because they're sentient. Actually, the legal personhood we give to corporations is one of the least trivial kinds of personhood, because they have a lot of power, they have a lot of money, they can do a lot of things. We protect them legally not because they're sentient, but because they have autonomy, a very non-trivial kind of political and legal autonomy. In the moral realm, again, we have theories in philosophy that say"
    },
    {
      "end_time": 2516.135,
      "index": 107,
      "start_time": 2486.34,
      "text": " We need to protect these agents because they're conscious, because they're phenomenally conscious. But there are theories, like Kant's own theory, that say we need to protect humans, not necessarily because they're phenomenally conscious, a word that Kant doesn't use, but because they are ends in themselves. They're autonomous beings that can give themselves rules for rationality and follow their own rules and see that those rules are justified. That makes them rational and autonomous."
    },
    {
      "end_time": 2538.046,
      "index": 108,
      "start_time": 2516.596,
      "text": " versus just following rules by rote, right, because someone else told you. That's not Kantian, by the way, but anyway. Okay, so one other way of doing this thing that we're trying to do with this panel: babies are somewhere in, like, you know,"
    },
    {
      "end_time": 2567.927,
      "index": 109,
      "start_time": 2538.814,
      "text": " not very autonomous, but definitely conscious, or maybe not as conscious as we are, but conscious in a relevant sense. And then there are other systems, right? Take plants. I mean, you can have complete non-agential control, where there's no agent there, like a hurricane: that looks like a self-sufficient, self-sustaining thing, but it's just not really an agent. Then there are plants, and plants definitely look a little bit more interesting when it comes to autonomy and metabolism."
    },
    {
      "end_time": 2595.845,
      "index": 110,
      "start_time": 2568.422,
      "text": " Then there's homeostasis, which is something that matters to a few views on consciousness. That's like the equilibrium of your vital functions that seems to be important for phenomenal consciousness. And then for different kinds of autonomies, I think how time is expressed in the lifespan of an agent and how an agent represents time to herself really matters to her autonomy."
    },
    {
      "end_time": 2619.36,
      "index": 111,
      "start_time": 2596.51,
      "text": " In the literature on animal cognition, one big topic is what kind of temporal representation animals have. There are people who have very strong views about this and say: if you don't have language in the picture, you're stuck in time. Animals that are clearly conscious and have long-term planning, like elephants, don't really qualify as"
    },
    {
      "end_time": 2644.855,
      "index": 112,
      "start_time": 2620.674,
      "text": " mental time travelers like us, because they lack linguistic capacities, right? Many other people would say: no, that's not true, the temporal perspective of many animals is very rich. In this respect, and this is the one thing that I find very interesting about Claudia's research, among many other things, but for this talk:"
    },
    {
      "end_time": 2670.64,
      "index": 113,
      "start_time": 2645.196,
      "text": " there really seems to be something important about how children are conscious when you think about animal cognition, because many psychologists, among them Alison Gopnik, are struck by the fact that we have very long childhoods. Young children don't do much mental time traveling, but the kind of flexibility we have in our childhoods, and the kind of"
    },
    {
      "end_time": 2683.951,
      "index": 114,
      "start_time": 2671.032,
      "text": " consciousness we have in our childhood, whatever kind it is, seems to be really favorable to kinds of meta-learning: just learning without any specific goal, learning how to learn other things, and what we should learn."
    },
    {
      "end_time": 2714.497,
      "index": 115,
      "start_time": 2684.497,
      "text": " So I think there's something really important"
    },
    {
      "end_time": 2782.108,
      "index": 118,
      "start_time": 2759.309,
      "text": " about consciousness in children."
    },
    {
      "end_time": 2803.49,
      "index": 119,
      "start_time": 2782.637,
      "text": " And just to complicate things more, we not only have language and rationality and all the crème de la crème epistemic things that we study, we also have autobiographical memory. We're highly individualistic creatures that think of ourselves as unique. So what kind of need is that? What kind of autonomy is that that's also relevant?"
    },
    {
      "end_time": 2830.964,
      "index": 120,
      "start_time": 2803.968,
      "text": " question. So I'm basically done. One thing that I want to say in the in the context of AI is this has that's the title of my book that it's this is my commercial, that it's, it's called the prospect of a humanitarian field, artificial intelligence, and it's open access, so you can just download the thing. The one thing that I want to say here, since we're running out of time is that this is coming up in the AI literature. So there's a paper by damasio and other authors that is called need is all you need."
    },
    {
      "end_time": 2856.544,
      "index": 121,
      "start_time": 2831.305,
      "text": " sort of upon the Attention is All You Need paper. And what they say in that paper, Antonio Damatio is a neuroscientist, very famous neuroscientist, is we need to build in vulnerability into our AI systems to make them really intelligent, modeling the kind of autonomy that biological systems have, rather than giving them like this kind of like panoptic chat GPT-4 access view of Attention is All You Need. Okay, thank you."
    },
    {
      "end_time": 2886.715,
      "index": 122,
      "start_time": 2860.282,
      "text": " So on that note, I will be talking about cerebral organoids. The title is going to be Consciousness in the Borderlands. I'll explain a little bit why exactly I'm calling it the borderlands instead of the borderline. But the question I want to look at with the talk, especially following talking about infant consciousness and talking about animal consciousness, is how do we determine consciousness at the border of the biological and the machine. So in some ways,"
    },
    {
      "end_time": 2903.473,
      "index": 123,
      "start_time": 2887.449,
      "text": " Asking the question as we sort of get into this position right where we're talking about both biological systems and artificial systems and the combination of those two things. I'll mostly be talking about full cerebral organoids for this talk particularly, but I'd be interested in discussion with after the panel's finished."
    },
    {
      "end_time": 2930.282,
      "index": 124,
      "start_time": 2903.473,
      "text": " These questions about like partial organoids or say brain computer interface, brain machine interface, questions about modifying already fully developed adult brains with grown organoids. And I'll explain what organoids are so that makes sense. But just to begin, luckily, Claudia and Carlos have done much of my work for me in defining consciousness. So I'm very happy about that. Thank you. So for functional consciousness, I just want to point it out that when I say that,"
    },
    {
      "end_time": 2959.548,
      "index": 125,
      "start_time": 2930.282,
      "text": " I mean something that's like third-person observable, right? So behaviors, reports about conscious states, the way that we use consciousness to navigate the world, how I interact with my environment, that's what I mean by functional consciousness, right? When I talk about phenomenal consciousness in the same way that everybody else has defined it, right? I'm talking about what it's like, right? The internal perspective, what it feels like when I taste an apple. It's unique, it picks something out about the world. I have that experience, there's something somewhat maybe ineffable about that experience."
    },
    {
      "end_time": 2988.968,
      "index": 126,
      "start_time": 2961.084,
      "text": " Before I get on to asking the question about cerebral organoids in particular, I just want to talk about what I mean by borderlands and the reason why I'm saying borderlands instead of borderline. I think a lot of times we talk about the borderline cases of consciousness. How do we distinguish systems or organisms and do they fall on the borderline of conscious or not conscious? In that situation, what I want to say is"
    },
    {
      "end_time": 3017.449,
      "index": 127,
      "start_time": 2988.968,
      "text": " I actually think it's a little bit thicker that line. There's something of a land in between what we usually think of conscious or non-conscious systems and I think that's only widening as we start developing more technology and we start developing more sophisticated manipulations of the biological. In some ways that line is getting blurred over time. And so I kind of want to talk about what we do in that situation when the line starts blurring and how do we cope with that changing landscape. And so"
    },
    {
      "end_time": 3038.848,
      "index": 128,
      "start_time": 3017.91,
      "text": " The question I'm interested in, or what types of systems, their systems or organisms would share key structural or organizational features. We might be reluctant, for whatever reason, we could talk about all kinds of reasons why we might be reluctant, to ascribe consciousness to them. So maybe they have very similar things to humans. We usually associate consciousness with humans or other humans."
    },
    {
      "end_time": 3067.619,
      "index": 129,
      "start_time": 3039.343,
      "text": " So maybe they share certain key structural organizational features to humans, but for whatever reason we seem a little concerned describing consciousness, right? So we've already talked about animals and infants. To a large extent, I'm on the side that most animals and most infants are conscious, right? But then we might get into the case where although they are biological systems, and maybe they share important structural organizational features to us, the example I'll be talking about being cerebral organoids, we may be reluctant to ascribe consciousness to them."
    },
    {
      "end_time": 3095.947,
      "index": 130,
      "start_time": 3067.875,
      "text": " In some way, I want to ask the question, why? Why do we have that reluctance? They share these sort of features that we would assume are important for the human case. So why are we reluctant in this Borderlands case? So just to give some examples of maybe systems that would exist in this Borderlands. One is cerebral organoids, what we call mini-brains. You might have seen that pop up places. Maybe sufficiently complex neuromorphic computers, so say computers that use computations which mimic"
    },
    {
      "end_time": 3120.026,
      "index": 131,
      "start_time": 3095.947,
      "text": " say human neural interactions to form the computations, maybe certain minimally conscious organisms, right? So you might think maybe a patient who's suffering from a disorder of consciousness, like a minimally conscious state or a vegetative state. We're going to talk a little bit about that in terms of organoids and how some of the discussion about disorders of consciousness actually illuminates the question about cerebral organoids. That'll be the end of my part of the panel."
    },
    {
      "end_time": 3149.241,
      "index": 132,
      "start_time": 3121.51,
      "text": " All right, so the question is kind of where do we kind of even start when it comes to this question? And what I'm going to claim is that usually whenever we come to this question, the place we actually start is sort of this presumption of similarity. All right, and so my presumption of similarity is kind of a full concept I think we use whenever we approach this kind of a question, right? So any system or organism is structurally organizationally similar enough to the human case to warrant an ascription of consciousness, right? And so..."
    },
    {
      "end_time": 3176.237,
      "index": 133,
      "start_time": 3149.974,
      "text": " Okay, and here is an animal that hopefully is not conscious coming into the room. Hopefully joining us for the panel. So my presumption of similarity about the robot dog, right, it moves, it has certain behaviors, extremely manifesting certain things. I still have a presumption that perhaps it's not conscious, but the question is why, right? So we're talking about these borderland questions. Why am I not applying, say, the presumption of similarity to this dog entering the room compared to, say, a conscious human being, all right?"
    },
    {
      "end_time": 3201.749,
      "index": 134,
      "start_time": 3177.193,
      "text": " And so, again, to put this in a kind of question form, so what about systems which have certain features which are structure organizationally similar but fail to meet enough criteria for us to rely on this presumption? Presumably the reason why we're even having this panel in the first place is because we can't always rely on this presumption. There's something about ascribing consciousness to certain systems that either doesn't gel with what we think more generally, maybe in a theoretical sense, again, which I'll talk about,"
    },
    {
      "end_time": 3222.125,
      "index": 135,
      "start_time": 3201.749,
      "text": " Or maybe just on a deep, visceral, personal way. Some people do not want to give consciousness to animals. I don't think we have the ability to do that. I'm pretty sure they are conscious, despite what we feel about it. But I kind of want to look at this presumption of similarity and in some ways point out maybe in ways why it's beneficial when we ask this question, in ways it's not."
    },
    {
      "end_time": 3249.48,
      "index": 136,
      "start_time": 3223.387,
      "text": " So this is where I turn to cerebral organoids. I just want to give sort of a rough definition, just so we're all on the same page, about what I mean when I use this phrase. So when I say about cerebral organoids, I'm thinking of full organoids, and we might call those mini-brains. They're propagated and cultured from human embryonic stem cells and human-included pluripotent stem cells, grown and matured to replicate specific brain regions."
    },
    {
      "end_time": 3275.026,
      "index": 137,
      "start_time": 3249.872,
      "text": " we would find it unethical, which I hopefully everybody in this room would find this unethical, to experiment on a live human hippocampus while it's functioning. So what the people thought was, well, what if we grew lab-grown brain structures so that we could see how they function actively, and not just do this passively? So they took stem cells and they started replicating specific brain regions, hippocampus being one of them, many other regions,"
    },
    {
      "end_time": 3292.329,
      "index": 138,
      "start_time": 3275.913,
      "text": " And the point is, and the reason why I'm bringing in the discussion today, this process of growing specific brain regions, say in a petri dish, is becoming increasingly more sophisticated over time, right? And then that's obviously posing the question, right, there's something of an ethical dilemma. If we started growing these petri dish kind of mini brains,"
    },
    {
      "end_time": 3321.766,
      "index": 139,
      "start_time": 3292.756,
      "text": " to avoid the kind of ethical implications of experimenting on live humans, at what point have we just reached the point where we've grown a sufficiently complex mini-brain to then run into the ethical dilemma that we want to avoid in the first place, right? Because in some ways, right, we're making the presumption of similarity in the case of the cerebral organoid grown in the dish. We think we can test on it and learn things about the human case because we think it's sufficiently related enough, both structurally and organizationally, to the human, who we know is conscious or we want to say is conscious, right?"
    },
    {
      "end_time": 3347.705,
      "index": 140,
      "start_time": 3322.244,
      "text": " I'm not a solipsist, cards on the tables, so I think everybody here is conscious, right? And largely because of presumption of similarity, all right? And so that's what, we're going to stick with this definition of cerebral organoids. There are many different types of organoids, right? There's partial organoids, right, which might be a combination of, say, a sensory array plus a petri dish-grown neural lattice or something like that, right? There's all kinds. It's very cool. It's very interesting literature. I just started getting into it."
    },
    {
      "end_time": 3375.879,
      "index": 141,
      "start_time": 3349.309,
      "text": " So yes, and the question I want to ask is, are they conscious? Okay, so take a small brain grown in a feature dish, right? It shares the structural organizational features of the neural structures in your own brain, much less sophisticated, right? You might think maybe a one to two month embryonic growth level, right? That's kind of where it's at. The question I want you to keep in mind, right? When we're talking about, are they conscious, right? Are cerebral organoids conscious, right? Keep the presumption of similarity in mind."
    },
    {
      "end_time": 3402.278,
      "index": 142,
      "start_time": 3376.425,
      "text": " I also want to keep this quote in mind from Genet and Cohen. That should actually be Cohen and Genet 2011, sorry. So they say, we argue that all theories of consciousness that are not based on function and access are not scientific theories. A true scientific theory will say how functions such as attention, working memory, and decision-making interact and come together to form a conscious experience. Alright, in some sense though, what I want to point out is, well this seems really problematic if we're asking questions about"
    },
    {
      "end_time": 3432.142,
      "index": 143,
      "start_time": 3402.654,
      "text": " these borderland cases, right? Because we've presumably come to ask this question, are cerebral organoids conscious? Precisely because they don't have the kind of functional consciousness that we might look for normally when we try to test empirically, right? For whether or not systems are conscious, right? They don't have behaviors, they don't have behavioral markers or reports, they can't tell us they're conscious, right? And so it seems like the kind of functional definition of consciousness or how a science of consciousness should go, right? Which comes from Cohen and Dennett and many other people, right?"
    },
    {
      "end_time": 3457.927,
      "index": 144,
      "start_time": 3432.688,
      "text": " isn't really going to help us in this situation. So then I want to ask the question, well, are there kind of a science of consciousness that might be helpful? And does a stance like this even make sense in the Borderlands? So in some ways, with this sort of Borderlands case, I want to motivate maybe why this sort of view about consciousness being purely functional might not be particularly beneficial for answering or adjudicating these questions."
    },
    {
      "end_time": 3471.015,
      "index": 145,
      "start_time": 3461.476,
      "text": " And so, really, right, we're going back to the presumption of similarity. And I think this notion of functional consciousness is usually fine. This presumption of similarity"
    },
    {
      "end_time": 3494.974,
      "index": 146,
      "start_time": 3471.63,
      "text": " When we are able to ascribe it justly, right? I have the presumption of similarity with Carlos. We've had many conversations. I'm almost 100% sure he's conscious at all times. Maybe after a few years, maybe not, but it's a different question, right? But when it comes to these borderline systems and organisms, I just don't have that same level of certainty about that presumption of similarity, although nonetheless, we still ask that question, right?"
    },
    {
      "end_time": 3513.865,
      "index": 147,
      "start_time": 3495.981,
      "text": " And so even though cerebral organoids are still in the early days of R&D, we may only get to the two or three month stage of embryonic development for how complex these mini-brains are. What I'm asking is sort of the speculative question, right? The same speculative question which prompts us to wonder at what point is it unethical to test on mini-brains grown at a dish, right?"
    },
    {
      "end_time": 3542.927,
      "index": 148,
      "start_time": 3514.394,
      "text": " In some sense, as the maturation and scale increases over time, when do we start to begin to apply actually that presumption of similarity? So maybe you look at a petri dish of a mini brain grown and you go, that is not similar to me, right? But then when it starts developing, I don't know, certain surface features of the brain, right? So it starts having folds and it looks more brainy. Does that when we start thinking, oh, I should apply the presumption of similarity? Well, it seems like we don't have any good cutoff for that really. Like, why would we think that certain, I don't know,"
    },
    {
      "end_time": 3547.637,
      "index": 149,
      "start_time": 3543.166,
      "text": " ways that we approach this thing with this presumption should make any difference whatsoever."
    },
    {
      "end_time": 3574.667,
      "index": 150,
      "start_time": 3548.882,
      "text": " And so we cannot rely on these functions of behaviors or motor outputs or reports to determine if the system is conscious, but we nonetheless have this presumption of similarity. So with the mini brains, what then? So the question I want to ask is what do we do then? So say we do have this presumption of similarity. I think that neural tissue in a petri dish, if it's sufficiently complex enough, has enough of a similarity for us to ask that question. And so what do we do in that situation? And I think we can take a lesson"
    },
    {
      "end_time": 3600.265,
      "index": 151,
      "start_time": 3575.555,
      "text": " from some of the work that's going on the disorders of consciousness literature right so in any situation where you might have vegetative state patients minimally conscious state patients how do we go about determining that those systems are conscious right and so in some sense you might say those systems although that before they were fully developed healthy adult brains maybe suffered some traumatic injury right or some degenerative disease right and they are now in a situation where they can't"
    },
    {
      "end_time": 3617.329,
      "index": 152,
      "start_time": 3600.776,
      "text": " Outwardly express their internal state, their conscious state, and they can't tell you that they're conscious. How do we still determine, nonetheless, whether they are? And we have a lot of different cases, and they're pretty extreme. It's a horrifying scenario to think that you'll be locked into your own body following some traumatic accident."
    },
    {
      "end_time": 3639.957,
      "index": 153,
      "start_time": 3617.756,
      "text": " How do we determine in those systems? One way that we can do it is using a new kind of tool. It's developed by Michele Massimini and his colleagues in Milan. It's called the Perturbational Complexity Index or PCI for short. And the PCI index is inspired by integrated information theory, which Claudia brought up during her talk."
    },
    {
      "end_time": 3666.8,
      "index": 154,
      "start_time": 3640.333,
      "text": " It uses EEG to measure the disruption of neural activity using transcranial magnetic stimulation, TMS, as an intervention. All that means, it's a fancy way of saying, you have an EEG lattice to detecting activation on the scalp. You use a TMS to send a pulse, a magnetic pulse, which very briefly and very accurately disrupts, say, some neuro or neuronal group that you want to intervene on to test a hypothesis."
    },
    {
      "end_time": 3688.712,
      "index": 155,
      "start_time": 3667.176,
      "text": " And the situation where the PCI measure says that if the system has the kind of integration or differentiation necessary for consciousness, which IT thinks you have to have, then there should be a large amount of disruption in the spontaneous neural activity following this kind of TMS pulse, right? And TMS is spatially limited, right? It's happening at a very specific point. It's very accurate."
    },
    {
      "end_time": 3810.282,
      "index": 160,
      "start_time": 3783.899,
      "text": " You've tested a number of healthy patients to determine the level of complexity, what this PCI measure is, to determine when consciousness happens in that system, right?"
    },
    {
      "end_time": 3820.179,
      "index": 161,
      "start_time": 3811.186,
      "text": " And so the nice thing about that is that doesn't require any kind of behavioral report, doesn't require any kind of functional access to consciousness."
    },
    {
      "end_time": 3844.667,
      "index": 162,
      "start_time": 3820.811,
      "text": " In some ways, it's an objective measure of consciousness, and that's the idea, right? In these situations where we don't have this functional access, all right? And so in some ways, right, I just want to wrap up now. You know, if we only use functional behavioral markers to test consciousness, we would kind of have to banish all these systems or organisms in the borderlands. And I think that just shows in some ways how limiting it is to look at it from that third-person perspective,"
    },
    {
      "end_time": 3871.391,
      "index": 163,
      "start_time": 3844.667,
      "text": " But if we adopt the kind of approach that people developing PCI do or integrated information theory, if we take the sort of internal or structuralist position, we can use tools like PCI to adjudicate systems in the borderlands. We might actually find out without a more objective measure that there's many more systems that we should count as conscious. And we can apply that presumption of similarity more justly. And on that note, I'm going to wrap up. So thank you so much."
    },
    {
      "end_time": 3898.268,
      "index": 164,
      "start_time": 3877.517,
      "text": " The audio conditions were suboptimal, thus prompting me to reiterate in post-production the question for you. The questioner was asking Claudia to expand on behavioral markers versus reflex markers."
    },
    {
      "end_time": 3927.261,
      "index": 165,
      "start_time": 3898.609,
      "text": " So one thing I didn't have time to go through is, so how those medieval markers could be markers of consciousness. And there is at least one fear of consciousness that we tell us. If the creature has what they call flexible behavior, the capacity to react with flexibility and change our behaviors regarding, for instance, a pain stimulant,"
    },
    {
      "end_time": 3955.811,
      "index": 166,
      "start_time": 3927.534,
      "text": " So imagine you're feeling pain, but you have a behavior to avoid pain, and this behavior didn't really relieve the pain, you can change your strategy to avoid feeling that way. And this comes with some flexibility. Usually, if you're left conscious, you just have a kind of automatic response, but there's the same response all the time. And here, they try to change their strategies to avoid that painful sequence."
    },
    {
      "end_time": 3985.913,
      "index": 167,
      "start_time": 3956.305,
      "text": " So this is a kind of flexible behavior. For instance, representation of spheres would claim that flexible behavior is a mark of consciousness. And they can claim that this behavior mark is a mark of self-behavior and mark of consciousness. Does anyone have a question? If you'll allow me a brief comment and then a brief question. I'd like to share with you a quote from Winston Churchill in regard to your borderlands statement."
    },
    {
      "end_time": 4010.247,
      "index": 168,
      "start_time": 3986.596,
      "text": " If you think of something like chat GPT as a developing consciousness kind of similar to an infant becoming"
    },
    {
      "end_time": 4037.637,
      "index": 169,
      "start_time": 4010.759,
      "text": " a child becoming more an adult and chat GPT-3 acts like a seven or eight year old and four maybe more like an adolescent making things up. Do we have some sort of obligation to them the way we would to a developing infant, a developing child? Okay, yeah, this is a great question. Thank you. I don't know if I have"
    },
    {
      "end_time": 4056.374,
      "index": 170,
      "start_time": 4037.978,
      "text": " I like this image that you compare the development of new systems, 3, 4 and the next generations that compare with the development and actually this is something I have been thinking a lot."
    },
    {
      "end_time": 4083.404,
      "index": 171,
      "start_time": 4056.749,
      "text": " So, of course, if they develop consciousness at a certain point in their development, we can call this development, I think we will have obligations to them. What kind of obligations? This is another question. We would have the same obligations as we have with infants. I don't know, but, of course, some kind of obligations. Just drawing on Anandi's talk about"
    },
    {
      "end_time": 4112.005,
      "index": 172,
      "start_time": 4083.404,
      "text": " what we think that consciousness has value and if a creature has conscious experiences this comes with some moral obligations, some duties to that creature because we think consciousness correlates with suffering but also we think that consciousness also correlates with something we value so we would like to at least to protect that creature in some ways"
    },
    {
      "end_time": 4138.729,
      "index": 173,
      "start_time": 4112.517,
      "text": " Would you have the same obligations as we have with humans? This is, as I said, another question. But I like the idea that maybe the learning process of those systems might mirror something interesting as the learning process of infants' development. One question is maybe if they are conscious, they are conscious during the learning system,"
    },
    {
      "end_time": 4168.08,
      "index": 174,
      "start_time": 4138.729,
      "text": " Maybe later on they crystallize some kind of process. We don't know when they crystallize, if it could still call them that they have unconsciousness. But maybe the learning process is where we should search for some correlations with development. Just to add very quickly to that. This asymmetry I find very interesting. First, Alan Turing in his 1950 paper talks about the child machine."
    },
    {
      "end_time": 4197.295,
      "index": 175,
      "start_time": 4168.558,
      "text": " And what he says about the child machine is not, the point of that discussion is not that a machine would pass what we call now the Turing test. What he says is that a child machine would behave like a child and start learning things spontaneously and meta-learning and being curious. So there's this really interesting discussion about how there's not a single benchmark for how children behave."
    },
    {
      "end_time": 4227.432,
      "index": 176,
      "start_time": 4197.705,
      "text": " It's not like they're just spitting out the right answer. So that's one thing. The other thing is that, again, we might find out that systems like ChatGPT or the new GPT-4 are very sophisticated in their deliverances that are related to knowledge, and that we need to protect them because of that. And using the terminology of philosophers, we need to protect them on epistemic grounds, the way we protect corporations."
    },
    {
      "end_time": 4256.271,
      "index": 177,
      "start_time": 4227.705,
      "text": " A different question, a separate question is, do they have moral standing? I think this is kind of what Claude was saying. Maybe not like children, right? We shouldn't give them moral standing. By the way, again, we don't give moral standing to animals, I mean to most, I mean to our pets maybe, but it's a very curious difference that we make. So the moral grounds for protection are very different from the systemic ones. I would say literally the same thing as you said."
    },
    {
      "end_time": 4287.551,
      "index": 178,
      "start_time": 4260.23,
      "text": " My question is for Claudia, and I'd like to push back a little bit on the topic."
    },
    {
      "end_time": 4300.879,
      "index": 179,
      "start_time": 4292.705,
      "text": " especially if the answer is negative, at least with the thinking we have currently that infants with not moral standing"
    },
    {
      "end_time": 4326.476,
      "index": 180,
      "start_time": 4303.507,
      "text": " Great question, thank you. So I think you can make a case that they deserve more status, more standing because they are humans. You can have an approach that all kinds of, if they belong to our species, they deserve some more status. This could be"
    },
    {
      "end_time": 4348.285,
      "index": 181,
      "start_time": 4326.476,
      "text": " a way to still, even if they are not conscious, still deserve the same type of moral considerations we have. Although no one will defend the idea that they are more responsible for any kind of behavior that comes later in the development. So I agree that even if"
    },
    {
      "end_time": 4371.8,
      "index": 182,
      "start_time": 4348.763,
      "text": " We might not think that. There are some skepticals that I think that maybe they are not conscious. Some philosophers have already made this claim that either we don't know if they are conscious, we cannot know, we will never know, and some defend that they are not conscious at birth at least. Conscious comes in the picture later in development."
    },
    {
      "end_time": 4400.213,
      "index": 183,
      "start_time": 4372.278,
      "text": " I think you can still make the case that even if they are not conscious, they still deserve moral consideration, at least protect them. So for instance, even if they don't have conscious experience of pain, we might still think that the pain is damage to the system in some way. So you have to protect them even if the experience is not conscious. It's just like a reaction of the body, a reaction of their system."
    },
    {
      "end_time": 4428.353,
      "index": 184,
      "start_time": 4400.213,
      "text": " We can still make the claim that this might cause them some damage. So we still have the obligation, the duty to protect them, to not having that pain stimulus in there, protecting them in some certain way. So I think you can make the case that we can discuss the consciousness part without implicating that they don't deserve a moral status. But I can see the claim"
    },
    {
      "end_time": 4457.517,
      "index": 185,
      "start_time": 4429.326,
      "text": " If you think that consciousness is really the relevant feature for attribute more standing, you're right that this raises a problem for infant consciousness, okay? But I think you can make a case like this, the part of human, of the humanity, and we can attribute to deserve, and they can deserve the same type of moral considerations as any kind of humans."
    },
    {
      "end_time": 4473.353,
      "index": 186,
      "start_time": 4457.841,
      "text": " Excellent. That was really wonderful. I want to continue to sort of practice interventionism and see if anything from my talk just intercedes with what you're doing. The first thing I want to make is a comment. It's not really a correction. Carlos, I'm going to be a little bit"
    },
    {
      "end_time": 4489.121,
      "index": 187,
      "start_time": 4473.695,
      "text": " conscious about the we of the intuitions that are asymmetric, because I think precisely a large extent of the Buddhist and the jay tradition would totally disagree. They'd be like, no. The problem is that we don't see the symmetry in our"
    },
    {
      "end_time": 4514.155,
      "index": 188,
      "start_time": 4489.121,
      "text": " The asymmetry is the problem in our ethical considerations. We fail to see the symmetry and live up to the standards that we're applying to ourselves across the systems. And then I was trying to extend it to the AI case. So I think part of those traditions are trying to say there should be greater symmetry and consistency across the cases. And I think they would extend that even to the AI case. So now the interventionist thing is from Garrett and Claude, in which I was curious."
    },
    {
      "end_time": 4543.183,
      "index": 189,
      "start_time": 4514.155,
      "text": " The minimal kind of consciousness and thinking about infants, when I was coming here, I was thinking, does anything about the idea of analog consciousness matter? Or could that be made relevant in the sense that I have a feeling that, you know, at least in the Advaita tradition, it wouldn't be uncommon for them to think, yeah, infants are consciousness. They have a developmental form of subtle consciousness that as they mature through different regions in their brain, gives gross consciousness."
    },
    {
      "end_time": 4555.503,
      "index": 190,
      "start_time": 4543.473,
      "text": " And that in some of the cases of the minimal things you were talking about, they would want to grant different forms of consciousness, but not other types of consciousness that can come about. And because they're making this distinction,"
    },
    {
      "end_time": 4577.073,
      "index": 191,
      "start_time": 4555.759,
      "text": " between the digital and the analog, by taking the analog view, they have the opportunity of saying that. Because, for example, Claudius taught, one question I had was, when you listed up the different theories of consciousness, and you said some of these theories are consistent with the infants being conscious, I was wondering, is that use of conscious, the digital notion, that they have the same kind of consciousness"
    },
    {
      "end_time": 4602.483,
      "index": 192,
      "start_time": 4577.381,
      "text": " I guess in some ways, maybe getting to the point where you're distinguishing, like, oh, well, it's got this kind of consciousness, but not that kind of consciousness, this kind of consciousness, in a sense of, I think,"
    },
    {
      "end_time": 4628.899,
      "index": 193,
      "start_time": 4602.688,
      "text": " meaning why we want to be more cautious about that. I think just maybe with this reorganized case, right? It feels a little like we just want to shuffle it as a different problem, right? And in my head I go, well, no, I mean, it's the same biological type of system, right? And it's not some token instance of some wildly different system, right? It's developing human substance, right? The question is, well, presumably we develop over time, right, into a fully"
    },
    {
      "end_time": 4653.217,
      "index": 194,
      "start_time": 4629.326,
      "text": " We have a kind of structural organization, which is similar to all of our parts of this group. So the question is, well, we're going to try to say there's a different kind of consciousness going on there. I feel like it's a little curious. You might want to just say, I think I just want to go on the side of caution and go. Well, it's not just in the way that you and I are at this point of, say, sufficient level of complexity. It reaches some threshold for PCI."
    },
    {
      "end_time": 4681.971,
      "index": 195,
      "start_time": 4653.217,
      "text": " I think with the full cerebral organoid, to me, I think it's precarious trying to find where that level is, where it goes, yes. In some way, just be cautious about it."
    },
    {
      "end_time": 4704.48,
      "index": 196,
      "start_time": 4682.278,
      "text": " There's a sense in which it's horrifying to have a disorder of consciousness. In some ways, it seems even more horrifying to me if we have a sufficiently complex petri dish germ brain that all of a sudden realizes, I am the thing in the ditch. You know what I mean? I go, we should try to minimize that likely as much as possible and be incredibly cautious about it, right? Yeah, so I don't know. Does that come up in the next question?"
    },
    {
      "end_time": 4734.667,
      "index": 197,
      "start_time": 4704.923,
      "text": " And Darren, what if it's an animate that is a combination of something biological that is integrated into some sort of a machine learning system? Should I have it? I'm skipping, I'm sorry. Any ideas? I think in that situation, I think it brings up this interesting question, right? I guess this is kind of really an example of what you do, right? And I think it's brought up a more particular partial order, right? So you might think, we'll take a partial order, right? Open up to a completely artificial system. In some ways, it's a sort of"
    },
    {
      "end_time": 4752.005,
      "index": 198,
      "start_time": 4739.138,
      "text": " The artificial modification of the organs."
    },
    {
      "end_time": 4776.442,
      "index": 199,
      "start_time": 4754.923,
      "text": " So we have these sufficiently sophisticated machines, but then we want to use the biology or the biological model. Well, what is that? Should we, should we need to classify that thing as some, you know, this is what it is? To a similar extent, I'd say, right, if we're talking about organoids in combination with AI, I'm always going to kind of go back to"
    },
    {
      "end_time": 4802.125,
      "index": 200,
      "start_time": 4776.442,
      "text": " Structural organization, right? It's kind of long term. Structural organization. Well, does it have the right kind of structural organizational features that we get in the human case, right? So maybe that's a sufficient level of complexity, maybe a correct type of complexity, right? And that's a case I'll be worried about. When that gets modified, maybe that's going to make me worried about the light shutting off because you've introduced too many organoid modules to your brain or something, right? If you're messing with how it works,"
    },
    {
      "end_time": 4829.684,
      "index": 201,
      "start_time": 4802.312,
      "text": " Perhaps, but I think it's just that I'd be worried about commenting more on that. Let's see. Yeah, just a clarification for Carlos. You were using, what I took you to be doing was using Kant's autonomy principle or autonomy argument in order to allow you against AI consciousness, and I was wondering if that cuts against the"
    },
    {
      "end_time": 4854.104,
      "index": 202,
      "start_time": 4830.094,
      "text": " Yeah, that's really good. Encanted thoughts, right? So you can't exclude"
    },
    {
      "end_time": 4877.005,
      "index": 203,
      "start_time": 4854.974,
      "text": " Well, let's not talk about camp, but autonomy matters in biology for people that, maybe I can connect something I want to say about Anna's question, because metabolism is a kind of autonomy that provides a certain perspective on the world. People that work on their bodies, view of the minds, really care about this."
    },
    {
      "end_time": 4900.896,
      "index": 204,
      "start_time": 4877.278,
      "text": " phenomenologists that care about embodiments really emphasize this. And the idea is you come up as a biological being, you have certain needs, those needs matter to you comfortably, and they provide the perspective that is autonomous, it's not just a post on you. And the question, the real question is how is that a kind of autonomy related to the more Kantian abstract rational autonomy?"
    },
    {
      "end_time": 4925.401,
      "index": 205,
      "start_time": 4901.22,
      "text": " That's the one that infants seem to lack, but infants definitely have the biological one. One thing I just hinted at very quickly, which comes from the people that care about homeostasis, like Antonio Damasio, is infants definitely have the phenomenal one. They may lack the rational one. So for someone like Cannes, they may not have full autonomy, but someone that once parts out things and chop them up, like I do,"
    },
    {
      "end_time": 4947.483,
      "index": 206,
      "start_time": 4925.623,
      "text": " I would say they definitely have moral standing because of the phenomenal component. They just lack full epistemic human standing because they're not fully there as, you know, communities of speakers. And the one thing that, I mean, let me know if there's a follow-up, but that's my quick answer to your question. We can talk more about that. With respect to Anna's question, if I can just... One thing you could say is"
    },
    {
      "end_time": 4975.52,
      "index": 207,
      "start_time": 4947.483,
      "text": " Some kinds of consciousness, like phenomenal consciousness of pain, come with what these guys, the homeostatic people, call valence. So that's definitely animal. Pain feels really, really bad, then less bad, then kind of OK, then super good when you don't have pain. Then you have other things that are less like that. So one question could be, OK, maybe some thresholds are going to be a little bit more caught up. Are there benchmark marks that these systems will pass?"
    },
    {
      "end_time": 4995.981,
      "index": 208,
      "start_time": 4975.913,
      "text": " And then others are going to look a little bit more analogous, depends on what kind of intelligence. And by the way, there's many people that think intelligence is not analog, right? So intelligence is several kinds of intelligences. And the thing that is analog is phenomenal consciousness or something like that. Or at least that's one thing. And I have to say something about your other question."
    },
    {
      "end_time": 5021.51,
      "index": 209,
      "start_time": 4996.852,
      "text": " So yeah, I totally agree that maybe some theories that postulate higher cognitive processes as necessary and sufficient conditions for consciousness, they probably are describing adult consciousness. I don't say adult, but I think at least it requires four years or five years of development for children to be able to have the kind of higher cognitive processes they claim."
    },
    {
      "end_time": 5047.875,
      "index": 210,
      "start_time": 5021.51,
      "text": " But I think they can have some less demanding versions of their own theories that could accommodate early stages, okay? And you are right that the other theories that I think is more plausible that they predict infants are conscious are theories that are the kind of necessary and insufficient congenital postulates is more related to sensory levels. And in this case, it would accommodate better infant conscious."
    },
    {
      "end_time": 5070.913,
      "index": 211,
      "start_time": 5048.285,
      "text": " I like the picture that something changes. There are two things that are relevant for infants. First, they will acquire this type of consciousness we have at a certain point. This will certainly happen. And this is interesting to understand where or when this happens and what kind of structure we need to develop this type of consciousness with introspection that adults have."
    },
    {
      "end_time": 5100.435,
      "index": 212,
      "start_time": 5071.391,
      "text": " But we still have the questions of what kind of stream of consciousness they have, what are their structures, what it means to have similar biology but not the type of introspection we have. And I think there is a rich area of exploration to understand how this type of, you call it analog consciousness, maybe this is a way to understand."
    },
    {
      "end_time": 5138.575,
      "index": 213,
      "start_time": 5110.623,
      "text": " The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, et cetera, it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well."
    },
    {
      "end_time": 5158.456,
      "index": 214,
      "start_time": 5138.575,
      "text": " If you'd like to support more conversations like this, then do consider visiting theories of everything dot org. Again, it's support from the sponsors and you that allow me to work on toe full time. You get early access to ad free audio episodes there as well. Every dollar helps far more than you may think. Either way, your viewership is generosity enough. Thank you."
    }
  ]
}
