Transcript
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's not just science they analyze.
They analyze culture, finance, economics, business, and international affairs across every region. I'm particularly liking their new Insider feature, which was just launched this month. It gives you, it gives me, front-row access to The Economist's internal editorial debates,
where senior editors argue through the news with world leaders and policymakers in twice-weekly long-format shows. Basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a TOE listener, you get a special discount. Head over to economist.com/TOE to subscribe. That's economist.com/TOE for your discount.
It depends on the hard problem of consciousness. Why we like to anthropomorphize large language models, chatbots in particular, is because they communicate with us linguistically. In order to have empathy, you need to care about something. And it's not really clear to me at the moment whether our chatbots have the capability to care about anything.
Dr. Steven Gubka is a postdoctoral associate in Ethics of Technology at the Humanities Research Center at Rice University. His work analyzes the philosophy of emotions as well as the ethics and epistemology of technology. In this short talk, Dr. Gubka discusses the metaphors we use to conceptualize LLMs, such as ChatGPT, as well as the criteria for determining whether LLMs are reliable sources of information.
This talk was given at MindFest, put on by the Center for the Future Mind, which is spearheaded by Professor of Philosophy Susan Schneider. It's a conference held annually at Florida Atlantic University, where they merge artificial intelligence and consciousness studies. The links to all of these will be in the description. There's also a playlist here for MindFest. Again, that's the conference on merging AI and consciousness. There are previous talks from people like Scott Aaronson, David Chalmers, Stuart Hameroff, Sara Walker, Stephen Wolfram, and Ben Goertzel.
My name is Curt Jaimungal, and today we have a special treat, because usually Theories of Everything is a podcast. What's ordinarily done on this channel is I use my background in mathematical physics and I analyze various theories of everything from that perspective, an analytical one, but as well as a philosophical one, discerning, well, what's consciousness's relationship to fundamental reality? What is reality?
Are the laws as they exist even the laws, and should they be mathematical? But instead I was invited down to film these talks and bring them to you, courtesy of the Center for the Future Mind. Enjoy this talk from MindFest. Okay, so now Steven, who is that wonderful postdoc of mine. Absolutely.
Hello.
So it seems gratuitous almost to say anything more about chatbots than what's already been said, but I will try to add to the conversation just to stimulate what follows afterwards. What I want to do in this brief presentation is discuss three narratives that I've collected through talking to my friends, essentially, about large language models. One of the joys of being a philosopher is that one way you can do research is you can
pump the folk, as it were, for their intuitions about what they think about philosophical cases, ethical dilemmas, and in this case, chatbots. So the first narrative that I encountered when I started thinking about this and asking my friends about how they use chatbots like ChatGPT is the issue of mistakes, which has already come up in Scott's talk.
And the common term that gets used is hallucination. And I think on this hallucination metaphor, at least the way that I initially understood it, although maybe we've made some progress since then in this conference, is that large language models make mistakes seemingly at random, as if they are capable of perception and some kind of inexplicable error happens to them. And then they report what they seem to see in this case. Something that kind of got developed, I think, in conversation or in Q&A was this idea that
Maybe we could think about this sort of mistake as confabulation rather than hallucination. And indeed, this is what some of my friends have said about how large language models act when they ask questions. The kinds of mistakes that get made, such as adding extraneous details, embellishing, largely seem consistent with the information given by the large language model, but nonetheless go beyond in some way that ends up being extraneous and false. And this is much like how we construct narratives about ourselves
When we're asked about our behavior, right? So maybe I don't know why I have a particular annoying habit. But if you ask me about it, I can probably come up with some rationalization, some story I could tell you about why I do the thing that I do. And similarly, especially when you try to ask a chatbot about its sources. So I remember in particular wondering if I could ask ChatGPT detailed questions about one of Ursula Le Guin's books.
And it started just making up chapter titles, extraneous plot details, all these things that are maybe consistent with the surface-level summary of a book, but nonetheless missed the mark. Now, ultimately, I think that this way of thinking about chatbots' mistakes, as either hallucinations or confabulations, is a false dichotomy. And I think that largely the reason why it's a false dichotomy, and one that we shouldn't accept either option for,
is because these metaphors end up being objectionably anthropomorphic and reductive, right? In thinking that chat bots are the kinds of things that can hallucinate or confabulate, we're thinking about them like they're human beings, like they're agents with goals, that they have some kind of purpose, maybe to tell us the truth, maybe that's what they'll report to us if we ask them. And we're assuming that the reason that they make mistakes, that there's just one good explanation for all the mistakes that they make,
And there couldn't be multiple types of mistakes, multiple types of explanations. So if we're going to, I think, kick away this idea, I think what we need to recognize is that the way that ordinary people interact with chatbots and the way that we're tempted to, perhaps because of how they're designed, is we think about what they're doing as giving us testimony, as if they believe things and they're reporting those beliefs to us. But independent of some serious philosophy of mind,
I don't think we should think that chat bots have beliefs, that they are reporting things that they think to us. And I think to the extent that we can get away from this thinking, the extent that the ordinary person can get away from this thinking, we might be able to improve our ability to think critically, reflectively about this technology. So I instead suggest or wonder whether or not we could, instead of approaching it like a testifier, we could approach large language models like potential knowledge tools.
perhaps related to Michael's conception of an epistemic tool. This is a conception that Garrett with his co-author Carlos developed as thinking about a distinction between, as sort of prefigured by your comments earlier today, distinction between technology that is an epistemic agent that has beliefs, potentially has knowledge, versus a tool that we use to gather information to try to form our own knowledge.
So the third narrative, which I think is maybe engineered primarily by the people who want to promote the use of these things, especially as a component of search engines, is this idea that these things are getting better. They're improving. More data is being added to them. Now that they have access to the internet, they're more reliable. And I don't deny that more data and more training could make large language models more reliable.
But that's not a necessary consequence of additional data. So some of these tests of the abilities of large language models that show them scoring very well, in fact, also sometimes demonstrate a degradation of abilities in other areas, and sometimes the addition of bias that was unexpected. So these kinds of emergent properties or abilities of large language models, these unpredicted but sharp differences in behavior,
are not always improvements. So a jump in cognitive ability, say, going from getting a B to getting an A in a college calculus course, may nonetheless be accompanied by unexpected differences in behavior that aren't desirable, unexpected inaccuracies. Now, more to the point, though, ordinary users aren't going to know whether the large language models they use undergo changes at all. And to the extent that they do, they might assume that those changes
positively improve its reliability. So one of these narratives here that I think is worth taking a critical look at is this idea that these changes are necessarily improvements and that once we've achieved sort of a state where we can trust a large language model, that we don't run into a further problem where we can then ask, should we trust it in the future once it's been updated, once it's undergone additional training,
In conversation with Susan, we've called this problem the problem of diachronic justification. So the idea is that even if you had evidence at one point that a large language model was reliable, because of their unpredictability in these ways, that trust maybe shouldn't stick around after an update. You would have to then re-establish its reliability through whatever mechanism you initially established it with.
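This diachronic worry can be made concrete with a toy sketch. Everything below is invented for illustration, the benchmark, the threshold, and both "models" are stand-ins rather than real systems, but it shows the shape of the problem: trust earned by version 1 on a fixed benchmark says nothing about version 2 until the evaluation is re-run.

```python
# Toy illustration of "diachronic justification": evidence of reliability
# gathered before an update does not automatically transfer to the updated model.

def evaluate(model, benchmark):
    """Fraction of benchmark questions the model answers correctly."""
    correct = sum(1 for q, a in benchmark if model(q) == a)
    return correct / len(benchmark)

# A hypothetical benchmark of question/answer pairs (invented).
benchmark = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]

# Version 1 of a toy model: reliable on this benchmark.
def model_v1(q):
    return {"2+2": "4", "capital of France": "Paris", "3*3": "9"}.get(q, "?")

# Version 2 after an "update": perhaps better elsewhere, but silently worse here.
def model_v2(q):
    return {"2+2": "4", "capital of France": "Lyon", "3*3": "9"}.get(q, "?")

trusted_threshold = 0.9
assert evaluate(model_v1, benchmark) >= trusted_threshold
# Trust established for v1 does not carry over: v2 must be re-evaluated.
assert evaluate(model_v2, benchmark) < trusted_threshold
```

The point is not the numbers but the procedure: whatever mechanism established reliability the first time has to run again after every update.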
So I just want to close with a couple of things that I've been thinking about. So part of this story that I've been telling you, about how my friends in particular use large language models and how they think about them, relies on this idea that they may be trusting them for the wrong reasons, that they're anthropomorphizing them. Now, I think one of the reasons that strikes me as most obvious about why we like to anthropomorphize large language models, chatbots in particular, is because they communicate with us linguistically.
And I'm curious if there are other features of large language models that incline us to trust them, and if they could be designed without those features so that we don't trust them for the wrong reasons. Another question I have is whether or not there are better metaphors or analogies for when chatbots make mistakes. So if you agree with me that there's something objectionable about saying that ChatGPT hallucinates or confabulates, and maybe you don't agree with me on that,
Are there better ways to talk about the kinds of mistakes that these chatbots make? And finally, I'm curious: what kind of forthcoming evidence would we, in fact, need to establish that a particular large language model is reliable enough to trust? Thank you.
Thank you, Steven. Those were wonderful questions. And let me just ask the people on the panel if they have any ideas here. Mark. So this idea of anthropomorphizing LLMs I think is really interesting. I'm not sure it's possible to decouple it from that, though, right? Because if you talk to something that talks like a human, you naturally infer human characteristics to it.
I mean, if you look at some of the early use cases of LLMs, like the app Replika, which came out even before GPT-3 was available, I believe, and just all the instances of the new sort of fine-tuned chatbots that people have created using GPT, they all simulate people, right? They simulate, you know, romantic partners. They simulate various other things that people want to talk to. People, right? So I don't know that you can really escape that, because
language is such a human thing that if you communicate with language with something else, you're naturally going to do that. Or another example, right? The Google engineer who was convinced that the chatbot had consciousness, right? Was absolutely convinced about that. Those kinds of things make me think that it's not really possible to decouple that, you know, humanization of these things from the tech itself.
Yeah, absolutely, I agree with you, Mark, and I also would like to push it a little further, in that, you know, these chatbots also change the way that humans speak to them and the way that they structure their language, based on what those chatbots understand to some extent. And so if people are changing the way that they talk
to chatbots because the chatbots can understand speech in certain ways and they don't understand it in other ways, for example, giving commands, et cetera, I think that it also changes the way that people relate to them. So it's a dual way. It's not just, you know, the chatbots to the people, but it's also how the people are changing the way that they are communicating. And it's also a commodification of human language, which is something that I think is another thing that we have to be looking at.
Yeah, no, I completely agree that once you build something whose purpose is to interact like a human, it would be bizarre not to anthropomorphize it, right? I mean, that's the whole point, that you can interact with it using the language that you know. But if anyone is worried about that and wants to resist that, then I guess following from my talk, I can strongly suggest to them, try submitting the same prompt over and over.
right? Try, you know, rewinding it, right? And saying, okay, yes, I can do this very, very non-human-like thing with it, right? I can just see the whole branching tree of possibilities that it could have given, other than the one that it did. Now, I wanted to respond to something that was, you know, like a critique of what I thought you were saying.
People say that the language models are getting better. Just to be clear, I would not say that there is any a priori reason why we knew that GPT-4 had to be better than GPT-3, nor would I say that there are no examples where GPT-3 is better than GPT-4. I would just say that if you try them both out on a range of things, you will see that GPT-4 plainly obviously is better.
Yeah, I don't contest that at all, as someone who's used both of them. I was really struck, upon reflection, hearing you describe this sort of shattering-the-illusion moment, right? When you rewind. So I've done this with image generation, right? And it gets to the point where I don't even know what I'm looking at. I just keep trying out the same prompt and seeing what it'll do again. I wonder if maybe something like this, like you're suggesting, might be effective, sort of like: oh yeah, this is just something that's following instructions. It's not, yeah.
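The "rewind and resubmit" exercise Scott suggests trades on the fact that decoding is sampling from a probability distribution over continuations, not reporting a single answer. A minimal sketch of that branching tree, where the vocabulary and probabilities are invented toys standing in for a real decoder's distribution over tens of thousands of tokens:

```python
import random

# Invented toy "next-token" distributions standing in for an LLM decoder.
NEXT = {
    "the model": [("answers", 0.5), ("hallucinates", 0.3), ("refuses", 0.2)],
    "answers": [("correctly", 0.6), ("vaguely", 0.4)],
    "hallucinates": [("confidently", 0.7), ("rarely", 0.3)],
    "refuses": [("politely", 1.0)],
}

def sample_continuation(prompt, rng):
    """Walk the toy distribution, sampling one branch of the tree."""
    text, word = prompt, prompt
    while word in NEXT:
        choices, weights = zip(*NEXT[word])
        word = rng.choices(choices, weights=weights)[0]
        text += " " + word
    return text

# "Rewinding" the same prompt with different seeds exposes the branching tree
# of continuations the model could have given other than the one it did.
branches = {sample_continuation("the model", random.Random(seed)) for seed in range(50)}
```

Every run starts from the identical prompt, yet `branches` collects several distinct continuations, which is the point of the exercise: the output is a draw from a distribution, not a report of a belief.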
Just talking about trust, and perhaps part of the problem is that we look at large language models today expecting that they're this omnipotent, sort of all-knowing model, and, you know, we ask such broad questions and expect it to be, you know, 100% every time. And perhaps what we're going to see is that there's going to be more focus on domain-specific models, which I think are already showing that they are more predictable and more deterministic, and that
Maybe eventually, you know, actually you can see it now, I think, with the mixture-of-experts models, right, which are basically a mix of models, that ultimately it's going to be a collection of domain-specific models and then just selecting the right model. And I think that's probably going to be the path to, you know, developing trust: by just having kind of more focused models as opposed to these generic omnipotent models.
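The "collection of domain-specific models plus selection" idea can be sketched as a router. The keyword matching and the three "experts" below are invented stand-ins (real mixture-of-experts routing is learned, not hand-written), but the shape is the same: classify the query, dispatch to a narrow model, and refuse when nothing matches.

```python
# Toy domain-specific "experts" (placeholder functions, not real models).
EXPERTS = {
    "medicine": lambda q: "differential diagnosis for: " + q,
    "law":      lambda q: "relevant statutes for: " + q,
    "math":     lambda q: "step-by-step derivation for: " + q,
}

# Hand-written keyword sets standing in for a learned routing network.
KEYWORDS = {
    "medicine": {"symptom", "diagnosis", "dose"},
    "law":      {"contract", "liability", "statute"},
    "math":     {"integral", "prove", "equation"},
}

def route(query):
    """Select the expert whose keyword set best matches the query, if any."""
    words = set(query.lower().split())
    scores = {domain: len(words & kw) for domain, kw in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        # A focused system can decline instead of improvising an answer.
        return "no confident expert; defer to a human"
    return EXPERTS[best](query)
```

One design point the sketch makes vivid: a router over narrow models has a natural place to say "I don't know", which a single generic model presented as omniscient does not.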
Yeah, I think that does sound promising. So thinking about the applications in the medical field, right? If you had a model that was just trained on information about diagnosis, for example, that could potentially be a very useful tool to quickly generate, based on the symptoms, a bunch of likely alternatives, which I think someone who is trained in medicine could use very reflectively, using Michael's sort of understanding here about the limitations of what this thing can do. So I agree with you. I think one way
I have a question for Richard, and anybody in the audience, to follow up on that.
How feasible is it to create effective domain-specific models in areas like medicine, for example, or autonomous vehicle development, when the one percent situation could arise that actually seems to require troubleshooting from something like an AGI? Like, how would that work? And maybe I'm wrong about... I mean, I think, first of all, I think DeepMind demonstrated that with their
protein-folding models, right? That they could be very effective, and get pretty amazing results, from a very domain-specific problem area, right? But I think it depends on the domain, because self-driving cars, clearly that is a problematic area, and, you know, clearly getting the data to train with is an issue there. So I think you're going to see certain domains advancing quicker than others, right?
And I think that's just a reality. And I know, also speaking from industry, from the private sector, clearly the investment is going to start slowing down if we can't get deterministic results in a sort of domain-specific environment. So I think it's an inevitability that we will see that. But of course,
Tesla shows us that full self-driving is a hard problem, and hard problems are not going to be solved overnight with this. This is actually a little bit more kind of what Mark was saying earlier. But maybe it's connected to the questions that are going on here, so maybe this is for Steven as well.
You know, when it comes to this issue, I kind of feel like we don't need to reinvent the wheel, right? I mean, didn't Weizenbaum have this exact worry after Eliza? I mean, there's a reason why, right after Eliza happened, he wasn't concerned with the capabilities of AI or the capabilities of computers and technology progressing. He was worried about the way that we react to it when we come across it, right? He didn't look at it and go, oh my God, computers are going to take over the world. He went, oh my God, look at all these humans who walked up, got a very quick scripted response, and went, oh my gosh, there's a person behind this. There's a mind.
In some ways, I kind of think, like, yeah, maybe it's much more capable than Eliza, but it feels like it's the same problem, right? I mean, am I crazy about this? Right, in that way. But yeah, I don't know what you think about that. Like, in the sense: do you think this is qualitatively a different problem than that? This anthropomorphizing of technology or AI, it seems very similar. Yes, it's gotten better, it's gotten fancier, but yeah, I don't know.
Yeah, I'm very sensitive to this is-this-a-new-problem-or-an-old-problem question. I mean, Eliza came to mind immediately as this conversation started for me, sort of thinking about, like, yeah, it doesn't take much for us to get convinced that we're talking to another human being, or that this thing we're interacting with is relevantly like a human being, that we can think about it that way. I do wonder if this is like that, but much, much worse. And I'm also wondering, thinking about
this experience Scott illustrated about the illusion being broken: if we could have more of those moments built into the experience of using this thing, would that be worth it, or would that somehow cripple its function in some interesting way? Or could it wake us up to the fact that we shouldn't be anthropomorphizing it? Trust is a tricky issue, because you think there's a right answer.
And life is complex, life is ambiguous, life is sloppy. There's social dilemmas, there's trolley problems, there's moral dilemmas and so forth. My fear is that if we trust an AI because it seems so expert, just like we trust a real expert, we assume that's the answer, and life is ambiguous. There are many answers, and it all depends upon the values you build into the system and so forth. And I think we might lose some of that if we have such an expert system, like an AI, that we don't doubt its credibility.
There's this issue that social epistemologists are worried about called epistemic trespassing, where someone who is knowledgeable in one area enters another domain and says a bunch of stuff, even though you know their PhD was in mathematics, now suddenly they're saying things about philosophy. Now, saying that, I'm not saying no one should investigate fields other than their own or do interdisciplinary work, but the worry here is if we start thinking about, oh, this person or this chatbot is credible, after all, they say all these true things about mathematics,
Oh, but then we can ask it these harder, more difficult questions and thinking we can trust it because it showed off its expertise in one area. So we could be under this kind of illusion then that because it gets the simple problems right, that the more difficult problems are also ones that it would get right. Yeah, so just a couple responses. So as far as I could tell, a central reason why the academic sort of AI and cognitive science and linguistics communities
were very, very slow to sort of react to GPT in, you know, 2020 or 2021, is that they were all inoculated by Eliza, right? They had all learned the lesson from Eliza, and from, you know, the Loebner Prize that came after it, that if you see something that looks like a superficially impressive chatbot, right, there is actually nothing interesting going on under the hood.
And the only questions here are human psychology questions. Why would people be so stupid as to think that there's something under the hood when there isn't? And it was so strong that when there actually was something under the hood, they just could not see it in front of their faces. So that was the first thing. But the second thing is I'm sort of amused when people say, well, why has it been so much harder to build a self-driving car than to build a chat bot? And it seems like it's easier.
The answer to that, in some sense, is that it's not harder at all. In fact, compared to making an 80% accurate chat bot, it's quite easier to make an 80% accurate driver. It's just that you now want 99.999% accuracy, and that's the only hard part. There's also this, where do you set the threshold? I would set it, personally, as
where it's about as safe as or safer than a human driver, but it looks like in practice people might not allow it until it's like a hundred or a thousand times safer.
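Scott's point about thresholds can be made vivid with a little arithmetic. The figures below are invented round numbers, not real driving statistics: if a trip involves many roughly independent decisions, per-decision accuracy compounds, which is why "80% accurate" is usable for a chatbot but worthless for a driver.

```python
# Errors compound over independent decisions: success probability is p^N.
# All numbers here are invented round figures for illustration only.

def trip_success(per_decision_accuracy, decisions):
    """Probability of completing `decisions` decisions with zero errors."""
    return per_decision_accuracy ** decisions

N = 1000  # hypothetical number of decisions in one trip

chatbot_grade = trip_success(0.80, N)     # vanishingly small: the trip essentially never finishes
five_nines   = trip_success(0.99999, N)   # roughly 0.99: most trips are error-free
```

The asymmetry falls out immediately: the hard part is not matching 80% chatbot-style accuracy, it is the last factor of p you need so that p^N stays near 1.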
Okay, so I am a psychologist. I actually feel like I just landed on Mars the last two days, with all the language and all you're doing. Oh, am I not talking into the mic? Okay. Can you hear me? And I'm the Vice Chair of the Board of Hospice and Quality Control Committee. Okay. So my background is not only psychology, it's a lot of medicine, and it's a lot of corporate medicine, and it's insurance companies. All right.
Susan told me like three days before that we were going to be on this panel. Now, that's the worst thing for me, because I am very OCD and I want to have a whole presentation ready, but I couldn't give you a presentation because I don't really... This is like a new learning experience. So I had articles: AI is changing every aspect of psychology.
Then there was another article I got, how AI could transform the future of psychology profession, and it goes on and on. I'm not so sure. My doubts are I think it can be used for specific things, let's say like somebody is in recovery for addiction and wants to go to pick up a bottle of vodka,
They talk to their, I'm calling the chatbot Rose, I love flowers. And they talk to Rose, and Rose gives them, you know, no, Rose doesn't give them vodka. Rose says: go to a meeting. You know, there's a whole set of specifics, call your coach, call your sponsor, on and on. So I see specific avenues for this,
For your side, all of your side, because I'm kind of like the alienated, isolated person here. What I can't see is, and I want you to answer, can you see, I think, the most important quality, which I was saying, the Martin Buber, I do know a little philosophy, I and Thou: how do you see developing, or can it develop, or can it not have, empathy?
And what do you see empathy as? And do you think that's a possibility? And I'm opening it up to everybody. That's my question. Well, I think that's a fantastic question. It's a question that nobody here really is going to know the answer to. But it does seem to me like right now, I'm dubious. When we ask about what chatbots can do,
I am open to the idea, as I said, that they might have beliefs, they might have quite complicated mental states. And is it possible that they have emotions? We get into a complicated question about what those are. But certainly something like, I'm really hoping that they don't have emotions, like subtle ones, like resentment. That would be not good.
Do not want resentment in my chat bot. But empathy, it would be nice if it had that. But at the moment, I think in order to have empathy, you need to care about something. And it's not really clear to me at the moment whether our chat bots have the capability to care about anything.
But that sort of depends on what we mean by care. But if they don't, then I don't think they have empathy, because I think empathy requires having a caring attitude, at least possibly, towards the person that you're in that I-Thou relationship with. Anyway. Yeah, of course. I actually think the harder question isn't, because I don't think a chat bot by its very nature can ever have empathy, but I don't think that's the question. I think the question is whether or not people can perceive it as having empathy, which I think is
is considerably possible. I know lots of people that fake empathy. I mean, and just to say you're not alone at this table, I'm a social scientist, not a computer scientist, so I join you here. I actually think the bigger danger here is social community capital, which is to say technology increasingly isolates us. And
there's this great book, Robert Putnam's Bowling Alone, about how television is isolating. But imagine if I perceive a chatbot as a person and that becomes my interaction, that satisfies my social need. I chat with the chatbot, it chats back,
It pretends it has empathy. I perceive it as having empathy, whether it does or not. And then we create this sort of universe where I don't have to engage, interact, or participate in the greater society around me. That, I think, is highly dangerous. And I think that, you know, we were already on our way before we had LLMs, right? But now people can be like, I don't need to talk to anyone. I talk to Rose, and Rose tells me all the things I want to know and feels bad about my day, right?
That is isolating and it's bad for society. It's bad for democracy. I mean, it's not functional and that's a problem for me. So whether it feels it or not, I don't know, but I can tell you if I perceive it does, then that's dangerous.
These questions that people like to ask, does an AI have empathy? Does it have intentions? Does it have beliefs? You have to separate out the metaphysical part, the part that depends on the hard problem of consciousness or whatever, you know, what is its internal experience, from the empirical part,
from the part that could be measured in principle. And so I think that's what you were getting at. But I think one of the surprises, if you spend some time with GPT, is that it's much better at emotional nuance than you might expect. You know, rephrase this email to be a little bit less passive-aggressive.
Whether the use of language models for therapeutic reasons, or for people who suffer from social anxiety to get practice, how well it works is an empirical question. Randomized controlled trials will tell us more about this. I don't think that we should arrogate to ourselves the idea that we can basically guess the answers to those questions from our armchairs.
So, Kevin, I totally agree with you. I really think the key question is how humans relate to it and perceive these affective states within the system itself. But, Miriam, I think you raised a really interesting question as to whether or not an LLM can actually have an affective state. So if it can demonstrate empathy, I think it would require some emotional state, correct? But I guess what I'm getting at is, humans have multiple dimensions in which we cogitate. We can think about things in terms of
our five senses, in how we perceive the universe. Language is certainly one medium through which we think about things, but chatbots only have that one medium, so they lack... I don't know that you can necessarily encode some affective state simply in that language medium. So I don't know that an LLM, in its current state, could have the affective state that would be required for true empathy.
Yes, sorry for going far away from empathy, but I think maybe what I'm saying can apply to this. So I just want to go back to the idea of attributing agency and what kind of attribution we might be doing. So there is a recent debate in developmental psychology trying to think about what kind of cognitive systems these systems are.
And so they make a distinction between learning from imitation and learning from exploration. I think it's a framework that Alison Gopnik and her team have been using. And I think maybe one idea is that those systems are probably learning through imitation. They're kind of imitating a process that they learned from other agents, the agents whose information they find on the internet.
But they're not doing the kind of exploration that a young child, a small child, would do, like a kind of truth-seeking cognitive system that goes out to explore the world and learns things from it. So maybe what we might be seeing is that
the systems are just learning through imitation what empathy is, and learning how to respond or behave in an empathic way through this learning system. So I think this framework might be interesting, because then we have to reframe the idea of hallucination. Hallucination is a metaphor related to a truth-seeking cognitive system, not to an imitation-learning system.
Yeah, I just want to add to this framework. GPT assures me that it does not experience anything. All right, that was awesome. And Claudia, thank you for that distinction. That really helped. And Marion, everybody, that was wonderful. Stephen, Mark. So now, cookies. We actually probably need shots after talking about all this national security stuff.
Firstly, thank you for watching, thank you for listening. There's now a website, curtjaimungal.org, and that has a mailing list. The reason being that large platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever they like.
That's just part of the terms of service. Now, a direct mailing list ensures that I have an untrammeled communication with you. Plus, soon I'll be releasing a one-page PDF of my top 10 TOEs. It's not as Quentin Tarantino as it sounds. Secondly, if you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself.
Plus, it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube.
which in turn greatly aids the distribution on YouTube. Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, they disagree respectfully about theories, and build, as a community, our own TOE. Links to both are in the description. Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio platforms. All you have to do is type in Theories of Everything and you'll find it. Personally, I gained from rewatching lectures and podcasts.
I also read in the comments
There's also PayPal, there's also crypto, there's also just joining on YouTube. Again, keep in mind it's support from the sponsors and you that allow me to work on TOE full time. You also get early access to ad-free episodes, whether it's audio or video. It's audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much.
Think Verizon, the best 5G network is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. Now what to do with your unwanted bills? Ever seen an origami version of the Miami Bull?
Jokes aside, Verizon has the most ways to save on phones and plans where you can get a single line with everything you need. So bring in your bill to your local Miami Verizon store today and we'll give you a better deal.
▶ View Full JSON Data (Word-Level Timestamps)
{
"source": "transcribe.metaboat.io",
"workspace_id": "AXs1igz",
"job_seq": 5218,
"audio_duration_seconds": 2301.8,
"completed_at": "2025-11-30T23:35:59Z",
"segments": [
{
"end_time": 20.896,
"index": 0,
"start_time": 0.009,
"text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze."
},
{
"end_time": 36.067,
"index": 1,
"start_time": 20.896,
"text": " Culture, they analyze finance, economics, business, international affairs across every region. I'm particularly liking their new insider feature. It was just launched this month. It gives you, it gives me, a front row access to The Economist's internal editorial debates."
},
{
"end_time": 64.514,
"index": 2,
"start_time": 36.34,
"text": " Where senior editors argue through the news with world leaders and policy makers in twice weekly long format shows. Basically an extremely high quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a toe listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
},
{
"end_time": 93.422,
"index": 3,
"start_time": 66.852,
"text": " Hola, Miami! When's the last time you've been to Burlington? We've updated, organized, and added fresh fashion. See for yourself Friday, November 14th to Sunday, November 16th at our Big Deal event. You can enter for a chance to win free Wawa gas for a year, plus more surprises in your Burlington. Miami, that means so many ways and days to save. Burlington. Deals. Brands. Wow! No purchase necessary. Visit BigDealEvent.com for more details."
},
{
"end_time": 115.009,
"index": 4,
"start_time": 94.326,
"text": " It depends on the hard problem with consciousness. Why we like to anthropomorphize large language models, chatbots in particular, is because they communicate to us linguistically. In order to have empathy, you need to care about something. And it's not really clear to me at the moment whether our chatbots have the capability to care about anything."
},
{
"end_time": 143.2,
"index": 5,
"start_time": 117.142,
"text": " Dr. Steven Gubka is a postdoctoral associate in Ethics of Technology at the Humanities Research Center at Rice University. His work analyzes the philosophy of emotions as well as the ethics and epistemology of technology. In this short talk, Dr. Gubka discusses the metaphors we use to conceptualize LLMs, such as chat GPT, as well as the criteria for determining whether LLMs are reliable sources of information."
},
{
"end_time": 173.2,
"index": 6,
"start_time": 143.2,
"text": " This talk was given at MindFest, put on by the Center for the Future Mind, which is spearheaded by Professor of Philosophy, Susan Schneider. It's a conference that's annually held, where they merge artificial intelligence and consciousness studies, and held at Florida Atlantic University. The links to all of these will be in the description. There's also a playlist here for MindFest. Again, that's that conference, Merging AI and Consciousness. There are previous talks from people like Scott Aaronson, David Chalmers, Stuart Hameroff, Sara Walker, Stephen Wolfram, and Ben Goertzel."
},
{
"end_time": 194.224,
"index": 7,
"start_time": 173.2,
"text": " My name is Curt Jaimungal, and today we have a special treat, because usually Theories of Everything is a podcast. What's ordinarily done on this channel is I use my background in mathematical physics and I analyze various theories of everything from that perspective, an analytical one, but as well as a philosophical one, discerning, well, what's consciousness's relationship to fundamental reality? What is reality?"
},
{
"end_time": 214.224,
"index": 8,
"start_time": 194.224,
"text": " Are the laws as they exist even the laws, and should they be mathematical? But instead, I was invited down to film these talks and bring them to you courtesy of the Center for the Future Mind. Enjoy this talk from MindFest. Okay, so now, Steven, um, is that wonderful postdoc of mine. Uh, absolutely."
},
{
"end_time": 237.773,
"index": 9,
"start_time": 215.128,
"text": " Hello."
},
{
"end_time": 269.377,
"index": 10,
"start_time": 240.538,
"text": " So it seems gratuitous almost to say anything more about chatbots than what's already been said, but I will try to add to the conversation just to stimulate what follows afterwards. What I want to do in this brief presentation is discuss three narratives that I've collected through talking to my friends, essentially, about large language models. One of the joys of being a philosopher is that one way you can do research is you can"
},
{
"end_time": 295.657,
"index": 11,
"start_time": 269.753,
"text": " pump the folk, as it were, for their intuitions about what they think about philosophical cases, ethical dilemmas, and in this case, chatbots. So the first narrative that I encountered when I started thinking about this and asking my friends about how they use chatbots like ChatGPT is the issue of mistakes, which has already come up in Scott's talk."
},
{
"end_time": 325.418,
"index": 12,
"start_time": 296.408,
"text": " And the common thing that gets, the term that gets used is hallucination. And I think on this hallucination metaphor, at least the way that I initially understood it, although maybe we've made some progress since then in this conference, is that large language models make mistakes seemingly at random, as if they are capable of perception and some kind of inexplicable error happens to them. And then they report what they seem to see in this case. Something that kind of got developed, I think, in conversation or in Q&A was this idea that"
},
{
"end_time": 355.06,
"index": 13,
"start_time": 325.64,
"text": " Maybe we could think about this sort of mistake as confabulation rather than hallucination. And indeed, this is what some of my friends have said about how large language models act when they ask questions. The kinds of mistakes that get made, such as adding extraneous details, embellishing, largely seem consistent with the information given by the large language model, but nonetheless go beyond in some way that ends up being extraneous and false. And this is much like how we construct narratives about ourselves"
},
{
"end_time": 381.852,
"index": 14,
"start_time": 355.452,
"text": " When we're asked about our behavior, right? So maybe I don't know why I have a particular annoying habit. But if you ask me about it, I can probably come up with some rationalization. Some story I could tell you about why I do the thing that I do. And similarly, especially when you try to ask a chatbot about its sources. So I remember in particular wondering if I could ask ChatGPT in particular detailed questions about one of Ursula Le Guin's books."
},
{
"end_time": 410.691,
"index": 15,
"start_time": 382.278,
"text": " And it started just making up chapter titles, extraneous plot details, all these things that are maybe consistent with the surface level summary of a book, but nonetheless missed the mark. Now, ultimately, I think that this way of thinking about chatbots as either their mistakes or hallucinations or confabulations is a false dichotomy. And I think that largely the reason why it's a false dichotomy and one that we shouldn't accept either option for"
},
{
"end_time": 439.77,
"index": 16,
"start_time": 411.067,
"text": " is because these metaphors end up being objectionably anthropomorphic and reductive, right? In thinking that chat bots are the kinds of things that can hallucinate or confabulate, we're thinking about them like they're human beings, like they're agents with goals, that they have some kind of purpose, maybe to tell us the truth, maybe that's what they'll report to us if we ask them. And we're assuming that the reason that they make mistakes, that there's just one good explanation for all the mistakes that they make,"
},
{
"end_time": 468.985,
"index": 17,
"start_time": 440.179,
"text": " And there couldn't be multiple types of mistakes, multiple types of explanations. So if we're going to, I think, kick away this idea, I think what we need to recognize is that the way that ordinary people interact with chatbots and the way that we're tempted to, perhaps because of how they're designed, is we think about what they're doing as giving us testimony, as if they believe things and they're reporting those beliefs to us. But independent of some serious philosophy of mind,"
},
{
"end_time": 497.176,
"index": 18,
"start_time": 469.292,
"text": " I don't think we should think that chat bots have beliefs, that they are reporting things that they think to us. And I think to the extent that we can get away from this thinking, the extent that the ordinary person can get away from this thinking, we might be able to improve our ability to think critically, reflectively about this technology. So I instead suggest or wonder whether or not we could, instead of approaching it like a testifier, we could approach large language models like potential knowledge tools."
},
{
"end_time": 524.258,
"index": 19,
"start_time": 497.654,
"text": " perhaps related to Michael's conception of an epistemic tool. This is a conception that Garrett with his co-author Carlos developed as thinking about a distinction between, as sort of prefigured by your comments earlier today, distinction between technology that is an epistemic agent that has beliefs, potentially has knowledge, versus a tool that we use to gather information to try to form our own knowledge."
},
{
"end_time": 554.77,
"index": 20,
"start_time": 525.452,
"text": " So the third narrative that I think is maybe engineered primarily by the people who want to promote the use of these things, especially as a component of search engines, is this idea that these things are getting better. They're improving. More data is being added to them. Now that they have access to the internet, they're more reliable. And I don't disagree that more data and more training couldn't make large language models more reliable."
},
{
"end_time": 584.019,
"index": 21,
"start_time": 555.282,
"text": " But that's not a necessary consequence of additional data. So some of these tests of the abilities of large language models that show them scoring very well, in fact, also sometimes demonstrates a degradation of abilities in other areas and sometimes the addition of bias that was unexpected. So these kind of emergent properties or abilities of large language models, these like unpredicted but sharp differences in behavior,"
},
{
"end_time": 612.773,
"index": 22,
"start_time": 584.582,
"text": " are not always improvements. So they might be sort of like, oh, a jump in cognitive ability, the ability to say, instead of getting a B in calculus and A in calculus in a college course, may nonetheless be accompanied by unexpected differences in behavior that aren't desirable, unexpected inaccuracies. Now, more to the point though, ordinary users aren't going to know whether large language models that they use undergo changes at all. And to the extent that they do, they might assume that those changes"
},
{
"end_time": 642.312,
"index": 23,
"start_time": 613.166,
"text": " positively improve its reliability. So one of these narratives here that I think is worth taking a critical look at is this idea that these changes are necessarily improvements and that once we've achieved sort of a state where we can trust a large language model, that we don't run into a further problem where we can then ask, should we trust it in the future once it's been updated, once it's undergone additional training,"
},
{
"end_time": 669.821,
"index": 24,
"start_time": 643.302,
"text": " In conversation with Susan, we've called this problem the problem of diachronic justification. So the idea is that even if you had evidence at one point that a large language model was reliable, because of their unpredictability in these ways, that trust maybe shouldn't stick around after an update. You would have to then reestablish its reliability through whatever mechanism you initially established it."
},
{
"end_time": 699.241,
"index": 25,
"start_time": 670.998,
"text": " So I just want to close with a couple of things that I've been thinking about. So part of this story that I've been telling you about how my friends in particular use large language models, how they think about them, relies on this idea that they may be trusting them for the wrong reasons, that they're anthropomorphizing them. Now, I think one of the reasons that strikes me as most obvious about why we like to anthropomorphize large language models, chatbots in particular, is because they communicate to us linguistically."
},
{
"end_time": 727.312,
"index": 26,
"start_time": 699.906,
"text": " And I'm curious if there are other features of large language models that incline us to trust them, and if they could be designed without those features so that we don't trust them for the wrong reasons. Another question I have is whether or not there are better metaphors or analogies for when chatbots make mistakes. So if you agree with me that there's something objectionable about saying that ChatGPT hallucinates or confabulates, and maybe you don't agree with me on that,"
},
{
"end_time": 743.012,
"index": 27,
"start_time": 727.773,
"text": " Are there better ways to talk about the kinds of mistakes that these chatbots make? And finally, I'm curious, what kind of forthcoming evidence would we, in fact, need to trust a particular large language model that is reliable enough to trust? Thank you."
},
{
"end_time": 772.534,
"index": 28,
"start_time": 743.763,
"text": " This episode is brought to you by State Farm. Listening to this podcast? Smart move. Being financially savvy? Smart move. Another smart move? Having State Farm help you create a competitive price when you choose to bundle home and auto. Bundling. Just another way to save with a personal price plan. Like a good neighbor, State Farm is there. Prices are based on rating plans that vary by state. Coverage options are selected by the customer. Availability, amount of discounts and savings, and eligibility vary by state."
},
{
"end_time": 806.032,
"index": 29,
"start_time": 782.108,
"text": " Thank you, Stephen. Those were wonderful questions. And let me just ask the people on the panel if they have any ideas here. Mark. So this idea of anthropomorphizing LLMs I think is really interesting. I'm not sure it's possible to decouple it from that though, right? Because if you talk to something that talks like a human, it kind of like you naturally infer human characteristics to it."
},
{
"end_time": 834.718,
"index": 30,
"start_time": 806.323,
"text": " I mean, if you look at some of the early use cases of LLMs, or like the app Replika, which came out even before GPT-3 was available, I believe, and just all the instances of the new sort of fine-tuned chatbots that people have created using GPT, they're all, they all simulate people, right? They simulate, you know, like romantic partners. They simulate, you know, various other things that people want to talk to, right? People, right? So it's, I don't know that you can really escape that because"
},
{
"end_time": 859.991,
"index": 31,
"start_time": 835.06,
"text": " language is such a human thing that if you communicate with language with something else, like, you're naturally going to do that. Or another idea, another example, right? The Google engineer who was convinced that the chat bot was, had consciousness, right? Was absolutely convinced about that. Like, those kinds of things made me think that it's not really possible to decouple that, you know, humanization of these things from the tech itself."
},
{
"end_time": 882.346,
"index": 32,
"start_time": 862.09,
"text": " Yeah, that's absolutely, I agree with you Mark and I also would like to put something a little further in that, you know, these chatbots, they also change the way that humans speak to them and the way that they structure their language based on what those chatbots understand to some extent. And so if people are changing the way that they talk,"
},
{
"end_time": 911.869,
"index": 33,
"start_time": 882.671,
"text": " to chat bots because the chat bots can understand speech in certain ways and they don't understand it in other ways. For example, giving commands, et cetera. I think that it also changes the way that people relate to them. So it's a dual way. It's not just, you know, the chat bots to the people, but it's also how the people are changing the way that they are communicating and it's also a commodification of human language, which is something that I think is another thing that we have to be looking at."
},
{
"end_time": 942.108,
"index": 34,
"start_time": 913.2,
"text": " Yeah, no, I completely agree that once you build something whose purpose is to interact like a human, it would be bizarre not to anthropomorphize it, right? I mean, that's the whole point, that you can interact with it using the language that you know. But if anyone is worried about that and wants to resist that, then I guess following from my talk, I can strongly suggest to them, try submitting the same prompt over and over."
},
{
"end_time": 967.892,
"index": 35,
"start_time": 942.346,
"text": " right? Try, you know, rewinding it, right? And saying that, okay, yes, I can do this very, very non-human-like thing with it, right? I can just see the whole branching tree of possibilities that it could have given other than the one that it did. Now, I wanted to respond to something and, you know, if this was like a critique of what I thought you were saying,"
},
{
"end_time": 997.585,
"index": 36,
"start_time": 968.131,
"text": " People say that the language models are getting better. Just to be clear, I would not say that there is any a priori reason why we knew that GPT-4 had to be better than GPT-3, nor would I say that there are no examples where GPT-3 is better than GPT-4. I would just say that if you try them both out on a range of things, you will see that GPT-4 plainly obviously is better."
},
{
"end_time": 1027.5,
"index": 37,
"start_time": 998.148,
"text": " Yeah, I don't contest that at all. As someone who's used both of them. I was really struck upon reflection hearing you describe this sort of like shattering the illusion moment, right? When you rewind. So I've done this with art image generation, right? And it gets to the point where I just, I don't even know what I'm looking at. I just keep like trying out the same prompt and seeing what it'll do again. I wonder if maybe something like this, like you're suggesting might be effective is sort of like, oh yeah, this is just something that's following instructions. It's not, yeah."
},
{
"end_time": 1057.739,
"index": 38,
"start_time": 1028.899,
"text": " Just talking about trust, and perhaps part of the problem is that we look at large language models today as, you know, expecting that they're this omnipotent sort of all-knowing model, and, you know, we ask it such broad questions and expect it to be right, you know, 100% every time. And, you know, perhaps what we're going to see is that there's going to be more focus on domain-specific models, which I think are already showing that they are more predictable and more deterministic, and that"
},
{
"end_time": 1081.817,
"index": 39,
"start_time": 1058.046,
"text": " Maybe eventually, you know, actually you can see it now I think with the Mixtral models, right, which are basically a mix of models, that ultimately it's, you know, it's going to be a collection of domain-specific models and then just selecting the right model. And I think that seems like that's probably going to be the path to, you know, developing trust, by just having kind of more focused models as opposed to these generic omnipotent models."
},
{
"end_time": 1112.807,
"index": 40,
"start_time": 1083.251,
"text": " Yeah, I think that that does sound promising. So thinking about the applications in the medical field, right? If you had a model that was just trained on information about diagnosis, for example, that could be potentially like a very useful tool to quickly generate based on the symptoms. Here are a bunch of likely alternatives that I think like someone who is trained in medicine could use very reflectively using Michael's sort of understanding here about the limitations about what this thing can do. So I agree with you. I think one way"
},
{
"end_time": 1134.838,
"index": 41,
"start_time": 1113.183,
"text": " I have a question for Richard and anybody in the audience about that to follow up on that."
},
{
"end_time": 1163.422,
"index": 42,
"start_time": 1135.282,
"text": " How feasible is it to create effective domain-specific models in areas like medicine, for example, or autonomous vehicle development, when the one percent situation could arise that actually seems to require troubleshooting from something like an AGI? Like, how would that work? And maybe I'm wrong about... I mean, I think, first of all, I think DeepMind demonstrated that with their"
},
{
"end_time": 1191.305,
"index": 43,
"start_time": 1163.712,
"text": " protein folding models, right? That they could be very effective and get pretty amazing results in a very domain-specific problem area, right? But I think it is domain specific, like, because self-driving cars, clearly that is a problematic area, and I think, you know, clearly getting the data to train with is, again, there. So, I think you're going to see certain domains advancing quicker than others, right?"
},
{
"end_time": 1217.961,
"index": 44,
"start_time": 1191.63,
"text": " And I think that's just a reality. And I know also talking from industry private sector, clearly we're not going to see the investment in the private sector. It's going to start slowing down if we can't get that deterministic, if we can't get results in a sort of domain specific environment. So I think it's an inevitability that we will see that. But of course,"
},
{
"end_time": 1238.848,
"index": 45,
"start_time": 1218.558,
"text": " Tesla shows us that full self-driving is a hard problem and the hard problems are not going to be solved overnight with this. This is actually a little bit more kind of what Mark was saying earlier a while ago. But maybe it's kind of on the questions that are going on here so maybe this is for Stephen as well."
},
{
"end_time": 1268.848,
"index": 46,
"start_time": 1240.452,
"text": " You know, when it comes to this issue, I kind of feel like we don't need to reinvent the wheel, right? I mean, didn't Weizenbaum have this exact worry after Eliza? I mean, there's a reason why right after Eliza happened, he wasn't concerned with the capabilities of AI or the capabilities of computer and technology progressing. He was worried about the way that we react to it when we come across it, right? He didn't look at it and go, Oh my God, computers are going to take over the world. He went, Oh my God, look at all these humans who walked up, asked, got a very quick script response and went, Oh my gosh, there's a person behind this. There's a mind."
},
{
"end_time": 1293.319,
"index": 47,
"start_time": 1269.087,
"text": " In some ways, I kind of think like, yeah, maybe it's much more capable than Eliza, but it feels like it's the same problem, right? I mean, am I crazy about this? Right, in that way. But yeah. I don't know what you think about that. Like, in the sense, do you think this is qualitatively a different problem than that? Like, this anthropomorphizing of technology or AI, it seems very similar. Yes, it's gotten better, it's gotten fancier, but yeah, I don't know."
},
{
"end_time": 1321.288,
"index": 48,
"start_time": 1293.592,
"text": " Yeah, I'm very sensitive to this like, is this a new problem or old problem question? I mean, Eliza came to mind immediately as this conversation started for me as sort of thinking about like, yeah, it doesn't take much for us to get convinced that we're talking to another human being or this thing we're interacting with is relevant, like a human being that we can think about it that way. I do wonder if this is like that, but much, much worse. And I'm also wondering if thinking about"
},
{
"end_time": 1345.401,
"index": 49,
"start_time": 1322.995,
"text": " this experience Scott illustrated about the illusion being broken. If we could have more of those moments built into the experience of using this thing, would that be worth it, or would that somehow cripple its function in some interesting way? Or could it wake us up to the fact that we shouldn't be anthropomorphizing it? Trust is a tricky issue because you think there's a right answer."
},
{
"end_time": 1372.005,
"index": 50,
"start_time": 1345.759,
"text": " And life is complex, life is ambiguous, life is sloppy. There's social dilemmas, there's trolley problems, there's more dilemmas and so forth. My fear is that if we trust an AI because it seems so expert, just like we trust a real expert, we assume that's the answer and life is ambiguous. There's many answers and it all depends upon the values you build into the system and so forth. And I think we might lose some of that if we have such an expert system like an AI that we don't doubt its credibility."
},
{
"end_time": 1403.37,
"index": 51,
"start_time": 1373.626,
"text": " There's this issue that social epistemologists are worried about called epistemic trespassing, where someone who is knowledgeable in one area enters another domain and says a bunch of stuff, even though you know their PhD was in mathematics, now suddenly they're saying things about philosophy. Now, saying that, I'm not saying no one should investigate fields other than their own or do interdisciplinary work, but the worry here is if we start thinking about, oh, this person or this chatbot is credible, after all, they say all these true things about mathematics,"
},
{
"end_time": 1430.503,
"index": 52,
"start_time": 1403.763,
"text": " Oh, but then we can ask it these harder, more difficult questions and thinking we can trust it because it showed off its expertise in one area. So we could be under this kind of illusion then that because it gets the simple problems right, that the more difficult problems are also ones that it would get right. Yeah, so just a couple responses. So as far as I could tell, a central reason why the academic sort of AI and cognitive science and linguistics communities"
},
{
"end_time": 1454.753,
"index": 53,
"start_time": 1430.742,
"text": " were like very, very slow to sort of react to GPT in, you know, 2020 or 2021, you know, is that they were all inoculated by Eliza, right? They had all learned the lesson from Eliza and from, you know, the Loebner Prize that came after it, that if you see something that looks like, you know, a superficially impressive chat bot, right, there is actually nothing interesting going on under the hood."
},
{
"end_time": 1484.548,
"index": 54,
"start_time": 1454.991,
"text": " And the only questions here are human psychology questions. Why would people be so stupid as to think that there's something under the hood when there isn't? And it was so strong that when there actually was something under the hood, they just could not see it in front of their faces. So that was the first thing. But the second thing is I'm sort of amused when people say, well, why has it been so much harder to build a self-driving car than to build a chat bot? And it seems like it's easier."
},
{
"end_time": 1514.684,
"index": 55,
"start_time": 1484.821,
"text": " The answer to that, in some sense, is that it's not harder at all. In fact, compared to making an 80% accurate chat bot, it's quite easier to make an 80% accurate driver. It's just that you now want 99.999% accuracy, and that's the only hard part. There's also this, where do you set the threshold? I would set it, personally, as"
},
{
"end_time": 1524.428,
"index": 56,
"start_time": 1514.991,
"text": " where it's about as safe or safer than a human driver but it looks like in practice people might not allow it until it's like a hundred or a thousand times safer."
},
{
"end_time": 1553.524,
"index": 57,
"start_time": 1525.145,
"text": " Okay, so I am a psychologist. I actually feel like I just landed on Mars the last two days with all the language and all you're doing. Oh, am I not talking to the mic? Okay. Can you hear me? And I'm the Vice Chair of the Board of Hospice and Quality Control Committee. Okay. So my background is not only psychology, it's a lot of medicine and it's a lot of corporate medicine and it's insurance companies. All right."
},
{
"end_time": 1580.162,
"index": 58,
"start_time": 1554.855,
"text": " Susan told me like three days before we were going to be on the panel. Now that's the worst thing for me because I am very OCD and I want to have a whole presentation already but I couldn't give you a presentation because I don't really... This is like a new learning experience. So I had articles, AI is changing every aspect of psychology."
},
{
"end_time": 1606.135,
"index": 59,
"start_time": 1580.606,
"text": " Then there was another article I got, how AI could transform the future of psychology profession, and it goes on and on. I'm not so sure. My doubts are I think it can be used for specific things, let's say like somebody is in recovery for addiction and wants to go to pick up a bottle of vodka,"
},
{
"end_time": 1635.691,
"index": 60,
"start_time": 1606.647,
"text": " They talk to their, I'm calling the chat box, Rose. I love flowers. And they talk to Rose and Rose gives them, you know, no, Rose doesn't give them vodka. Rose says, go to a meeting. You know, there's a whole set of specifics: call your coach, call your sponsor, on and on. So I see specific avenues for this,"
},
{
"end_time": 1664.531,
"index": 61,
"start_time": 1636.015,
"text": " For your side, all of your side, because I'm kind of like the alienated, isolated person here. What I can't see is, and I want you to answer, can you see, I think the most important quality, which I was saying, is the Martin Buber, I do know a little philosophy, I and Thou. How do you see developing, or can it develop, or can it not have, empathy"
},
{
"end_time": 1691.903,
"index": 62,
"start_time": 1664.957,
"text": " And what do you see empathy as? And do you think that's a possibility? And I'm opening it up to everybody. That's my question. Well, I think that's a fantastic question. It's a question that nobody really here is going to know the answer. But it does seem to me like right now, I'm dubious. When we ask about what chatbots can do,"
},
{
"end_time": 1718.524,
"index": 63,
"start_time": 1693.285,
"text": " I am open to the idea, as I said, that they might have beliefs, they might have quite complicated mental states. And is it possible that they have emotions? We get into a complicated question about what those are. But certainly something like, I'm really hoping that they don't have emotions, like subtle ones, like resentment. That would be not good."
},
{
"end_time": 1744.104,
"index": 64,
"start_time": 1718.66,
"text": " Do not want resentment in my chat bot. But empathy, it would be nice if it had that. But at the moment, I think in order to have empathy, you need to care about something. And it's not really clear to me at the moment whether our chat bots have the capability to care about anything."
},
{
"end_time": 1774.019,
"index": 65,
"start_time": 1744.616,
"text": " But that sort of depends on what we mean by care. But if they don't, then I don't think they have empathy, because I think empathy requires having a caring attitude, at least possibly, towards the person that you're in that I-Thou relationship with. Anyway. Yeah, of course. I actually think the harder question isn't that one, because I don't think a chat bot by its very nature can ever have empathy, but I don't think that's the question. I think the question is whether or not people can perceive it as having empathy, which I think is"
},
{
"end_time": 1800.572,
"index": 66,
"start_time": 1774.309,
"text": " considerably possible. I know lots of people that fake empathy. I mean, I think, and just to say you're not alone at this table, I'm a social scientist, not a computer scientist, so I join you here. I actually think the bigger danger here is social community capital, which is to say technology increasingly isolates us, and"
},
{
"end_time": 1819.821,
"index": 67,
"start_time": 1801.032,
"text": " there's a great book about this, Robert Putnam's Bowling Alone, on how television is isolating. But imagine if I perceive a chat box as a person and that becomes my interaction, that satisfies my social need. I chat with the chat box, it chats back,"
},
{
"end_time": 1849.189,
"index": 68,
"start_time": 1820.026,
"text": " It pretends it has empathy. I perceive it as having empathy, whether it does or not. And then we create this sort of universe where I don't have to engage, interact or participate in the greater society around me. That I think is highly dangerous. And I think that, you know, we were already on our way before we had LLMs, right? But now people can be like, I don't need to talk to anyone. I talked to Rose and Rose tells me all the things I want to know and feels bad about my day, right?"
},
{
"end_time": 1862.756,
"index": 69,
"start_time": 1849.684,
"text": " That is isolating and it's bad for society. It's bad for democracy. I mean, it's not functional and that's a problem for me. So whether it feels it or not, I don't know, but I can tell you if I perceive it does, then that's dangerous."
},
{
"end_time": 1882.056,
"index": 70,
"start_time": 1863.114,
"text": " These questions that people like to ask, does an AI have empathy? Does it have intentions? Does it have beliefs? You have to separate out the metaphysical part, the part that depends on the hard problem of consciousness or whatever, you know, what its internal experience is, from the empirical part,"
},
{
"end_time": 1905.094,
"index": 71,
"start_time": 1882.056,
"text": " from the part that could be measured in principle. And so I think that's what you were getting at. But I think one of the surprises, if you spend some time with GPT, is that it's much better at getting emotional nuance right than you might expect. You know, rephrase this email to be a little bit less passive-aggressive."
},
{
"end_time": 1934.138,
"index": 72,
"start_time": 1905.094,
"text": " Whether the use of language models for therapeutic reasons, or for people who suffer from social anxiety to get practice, how well it works is an empirical question. Randomized controlled trials will tell us more about this. I don't think that we should arrogate to ourselves the idea that we can guess the answers to those questions from our armchairs."
},
{
"end_time": 1963.609,
"index": 73,
"start_time": 1934.753,
"text": " So Kevin, I totally agree with you. I really think the key question is how humans relate to it and perceive these affective states within the system itself. But Miriam, I think you raised a really interesting question as to whether or not an LLM can actually have an affective state. So if it can demonstrate empathy, I think it would require some emotional state, correct? But I guess what I'm getting at is humans have multiple dimensions in which we cogitate. We can think about things in terms of"
},
{
"end_time": 1987.227,
"index": 74,
"start_time": 1963.865,
"text": " our five senses, in how we perceive the universe, not just language. I mean, language is certainly one medium through which we think about things, but chatbots only have that one medium, so they lack the others. I don't know that you can necessarily encode some affective state simply in that language medium. So I don't know that an LLM in its current state could have the affective state that would be required for true empathy."
},
{
"end_time": 2014.309,
"index": 75,
"start_time": 1987.91,
"text": " Yes, sorry for going far away from empathy, but I think maybe what I'm saying can apply to this. So I just want to go back to the idea of attributing agency and what kind of attribution we might be doing. So there is a recent debate in developmental psychology trying to think about what kind of cognitive systems these systems are."
},
{
"end_time": 2043.336,
"index": 76,
"start_time": 2014.309,
"text": " And so they make a distinction between learning from imitation and learning from exploration. I think it's a framework that Alison Gopnik and her team have been using. And I think maybe one idea is that those systems are probably learning through imitation. They're kind of imitating a process that they learn from other agents, the agents whose information they find on the internet."
},
{
"end_time": 2059.991,
"index": 77,
"start_time": 2043.626,
"text": " But they're not doing the kind of exploration that a young child, a small child, would do, like a kind of truth-seeking cognitive system that goes out to explore the world and learns things from it. So maybe what we might be seeing is that"
},
{
"end_time": 2090.026,
"index": 78,
"start_time": 2060.111,
"text": " the systems are just learning through imitation what empathy is, and learning how to respond or to behave in an empathic way through this learning system. So I think this framework might be interesting, because then we have to reframe the idea of hallucination. Hallucination is a metaphor that relates to a truth-seeking cognitive system, not to an imitation-learning system."
},
{
"end_time": 2120.538,
"index": 79,
"start_time": 2091.067,
"text": " Yeah, I just want to add to this framework. GPT assures me that it does not experience anything. All right, that was awesome. And Claudia, thank you for that distinction. That really helped. And Marion, everybody, that was wonderful. Stephen, Mark. So now, cookies. We actually probably need shots after talking about all this national security stuff."
},
{
"end_time": 2135.845,
"index": 80,
"start_time": 2120.947,
"text": " Firstly, thank you for watching, thank you for listening. There's now a website, curtjaimungal.org, and that has a mailing list. The reason being that large platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever they like."
},
{
"end_time": 2162.329,
"index": 81,
"start_time": 2136.101,
"text": " That's just part of the terms of service. Now, a direct mailing list ensures that I have an untrammeled communication with you. Plus, soon I'll be releasing a one-page PDF of my top 10 TOEs. It's not as Quentin Tarantino as it sounds like. Secondly, if you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself."
},
{
"end_time": 2179.616,
"index": 82,
"start_time": 2162.329,
"text": " Plus, it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube."
},
{
"end_time": 2209.036,
"index": 83,
"start_time": 2179.838,
"text": " which in turn greatly aids the distribution on YouTube. Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, they disagree respectfully about theories, and build as a community our own TOE. Links to both are in the description. Fourthly, you should know this podcast is on iTunes. It's on Spotify. It's on all of the audio platforms. All you have to do is type in Theories of Everything and you'll find it. Personally, I gained from rewatching lectures and podcasts."
},
{
"end_time": 2230.708,
"index": 84,
"start_time": 2209.036,
"text": " I also read in the comments"
},
{
"end_time": 2259.036,
"index": 85,
"start_time": 2230.708,
"text": " There's also PayPal. There's also crypto. There's also just joining on YouTube. Again, keep in mind it's support from the sponsors and you that allows me to work on TOE full time. You also get early access to ad-free episodes, whether it's audio or video. It's audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much."
}
]
}