
Theories of Everything with Curt Jaimungal

Michael Lynch: AGI, Epistemic Shock, Truth Seeking, AI Risks, Humanity

May 24, 2024 1:15:03


Transcript

[0:00] The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze.
[0:20] Culture, they analyze finance, economics, business, international affairs across every region. I'm particularly liking their new Insider feature, which was just launched this month. It gives you, it gives me, front-row access to The Economist's internal editorial debates,
[0:36] where senior editors argue through the news with world leaders and policy makers in twice-weekly long-format shows. Basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
[1:34] We already know that everything we access on the internet, almost, is personalized. All the news that comes down, our Facebook feed, all the ads that we face when we're reading the New York Times, all these are personalized.
[2:28] Michael Lynch is a professor of philosophy at the University of Connecticut whose research specializes in truth, democracy, ethics, and epistemology. This talk was given at MindFest, put on by the Center for the Future Mind, which is spearheaded by Professor of Philosophy Susan Schneider. It's a conference, held annually at Florida Atlantic University, where they merge artificial intelligence and consciousness studies. The links to all of these will be in the description.
[2:54] There's also a playlist here for MindFest. Again, that's that conference, merging AI and consciousness. There are previous talks from people like Scott Aaronson, David Chalmers, Stuart Hameroff, Sara Walker, Stephen Wolfram, and Ben Goertzel. My name's Curt Jaimungal, and today we have a special treat, because usually Theories of Everything is a podcast. What's ordinarily done on this channel is I use my background in mathematical physics, and I analyze various theories of everything.
[3:18] Alright, thank you and thanks so much for being here. So,
[3:48] Just to put a little fear of God into you: last year, as many of you know, Elon Musk, the richest man in the world, announced that he was going to pour his considerable resources into funding what he called a maximum truth-seeking AI.
[4:12] Now, this was enough to cause a little worry, not least for reasons having to do with the fact that he made this announcement in an interview with Tucker Carlson. But as Susan and Mark noted in their Nautilus piece published shortly thereafter, the way he had of framing his mission
[4:40] actually raises some deep and interesting epistemic questions, questions about knowledge. And I'm going to ask three of those questions today. In what sense or to what extent can we use generative AI as an epistemic tool to help us achieve true beliefs and knowledge? That's a question which in one sense I think is really easy to answer, but in another way is a little bit harder.
[5:08] How might using this epistemic tool affect our own human epistemic agency?
[5:25] Where by epistemic agency, for this talk, all I mean is our capacity for deciding what to believe based on reasons. And I'm going to be particularly interested in how it affects our epistemic agency in relation to questions that we might be interested in that have social and political resonance. And then there's an implicit question that is really actually at the front of my mind, but is going to be in the background of this talk: how is all this going to affect democracy?
[5:56] All right. So the rough idea is I'm going to explore two different kinds of problems that we face in trying to use AI in a certain way, a certain kind of AI in a certain kind of way as an effective epistemic tool. And I'm going to say that these problems do actually pose some risks for our epistemic agency. And I'm going to say that these problems grow worse
[6:25] the greater we socially integrate, in a sense I'll explain, generative AI into various parts of our lives. So I've mentioned this term, epistemic tool. So I want to talk about what I mean by an epistemic tool. But in order to do that, we need to talk about tools a little bit in general. So one way we can evaluate, just one way, but a very common, natural way to evaluate our use of tools, is in terms of their reliability, right?
[6:55] a good tool, an effective tool, is one that is reliable in getting us the desired results. But, and this is a crucial distinction, I think, and one we're all probably too familiar with, a tool can be reliable in principle, that is, in ideal conditions or even just solid conditions, but it can be the case that we might not be able to use it reliably in actual conditions.
[7:25] And that might be because, first, there might be facts about us, like we're unable, perhaps for a variety of reasons, to actually use the tool reliably in a particular actual condition, or it might be something about the actual conditions departing from the ideal conditions. Example.
[7:45] I thought I'd use an example that would be really relevant here in Florida. The tool: a snowblower. Sure, all you Floridians are really familiar with this. Yeah, well, in case you're not, I see some puzzled looks. Snow is the white stuff that falls from the sky. And then in some parts of the country, it annoyingly collects on your driveway. So you purchase these things called snowblowers, which you use to throw the snow off the driveway.
[8:15] Ideally, though, when you buy a snow blower, if you have done that and you get it home, you'll see that the instructions, like a lot of tools we buy at Home Depot, will say, well, actually, this snow blower works to do these things that it says on the front of the box, as it were, but only in certain conditions. For example, the snow can't be wet. Okay, snow can't be wet. Okay, only dry snow.
[8:40] And, you know, there are certain inclines it can't handle, the driveway can't be steep or something like that, et cetera. And of course you have to be of a particular weight to operate it particularly effectively, and so on. So there are all sorts of familiar ways in which a tool can be reliable in principle, in certain conditions it might've been designed to be reliable in, but we might not be able to use it reliably in other conditions. That's a pretty common-sense distinction.
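To put rough numbers on that distinction, here is a minimal sketch in Python. The snowblower success rates and the climate mix are invented purely for illustration: the tool's ideal-condition reliability is identical in both cases, and only how often the user's actual conditions match the ideal ones changes.

```python
def reliability_in_use(p_ideal: float, p_degraded: float, share_ideal: float) -> float:
    """Expected success rate, weighting ideal vs. degraded conditions."""
    return share_ideal * p_ideal + (1 - share_ideal) * p_degraded

# A hypothetical snowblower: 95% effective on dry snow, 20% on wet snow.
print(f"{reliability_in_use(0.95, 0.20, share_ideal=0.9):.3f}")  # mostly dry climate: 0.875
print(f"{reliability_in_use(0.95, 0.20, share_ideal=0.2):.3f}")  # mostly wet climate: 0.350
```

Same tool, same box-front claims; whether you can use it reliably depends on where you are standing.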
[9:05] And we might say that, look, insofar as our agency is going to be increased, our ability to do things is going to be increased, it's going to be increased in part by our ability to use the tool reliably. When we're really picking what tool we want to use in particular actual conditions to get a job done, what we're worried about is picking one that we can use reliably in those conditions. We don't really care so much about whether it could be used by other people,
[9:34] reliably, in other conditions. All right. Okay, so epistemic tools. By epistemic tools I mean, for purposes of this talk anyway, as a broad definition: methods like the scientific method, machines like a calculator, or sources of information like CNN or the New York Times or whatever, that could be used to help you generate true beliefs and knowledge, or not, perhaps.
[10:03] And we can say again that an epistemic tool is effective insofar as it can be used reliably. And we can say that epistemic agency, our ability, our capacity to form and decide what to believe based on reasons that can be increased if we can use our epistemic tools reliably.
[10:23] And obviously, we're here to talk about whether we can use generative AI as an epistemic tool reliably. And I'm interested in a particular use of it, one that was forecast all the way back in 2021. Scott today was talking about even further back, 2019, even way back to 2014 and so forth. I mean, it goes back really far. But if you can remember as far back as 2021,
[10:50] there was a paper, a now well-known paper, by Don Metzler of Google Research, who suggested that it would be great if we could use natural language processing techniques and large language models to create a system capable of producing results, with or without actual understanding, that are of the same quality as a human expert in a given domain. And in this paper, Metzler et al.
[11:17] raised about eight different problems that need to be solved in order for this to happen in the way that they hoped. And whether or not those problems have been solved, we now of course have Perplexity, ChatGPT-4 and so on, Google's Bard, Musk's Grok, et cetera, et cetera.
[11:37] And we have this now being integrated, some cases on an opt-in basis, into our search engines, and we have it accessing the live internet, as Scott talked about today. Now there's a particular vision of this that was proposed in this, the use of these tools, that was proposed by Metzler,
[12:00] And, you know, Perplexity is an example of really following up on this, which is to use AI as a search replacement tool. In the paper, Metzler was suggesting really that we can re-envision search. We can re-envision search, replace search as we have now grown accustomed to it, with the links and whatnot, with authoritative single responses that
[12:29] perhaps or perhaps not, depending on the platform, might be footnoted with links to sources you can follow up on. So I'm going to give this type of use of AI a name. And again, is this the only way we can use these platforms? No, but it is the way that I'm interested in. I'm going to call it AISR. So the question is, can we use this as an epistemic tool, an effective one?
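To make the shape of AISR concrete, here is a minimal sketch in Python. The corpus, the example URLs, and the word-overlap retriever are all toy stand-ins, and simple string-stitching takes the place of the LLM synthesis step a real system would use; the point is only the interface: one synthesized, footnoted answer instead of a page of links.

```python
CORPUS = {  # hypothetical documents standing in for the live web
    "nws.example/snow":    "Wet snow is heavy and hard to clear from a driveway.",
    "diy.example/blowers": "Single-stage snowblowers are designed for dry snow only.",
    "misc.example/news":   "A new neutrino detector began taking data this year.",
}

def overlap(query_words: set[str], text: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(query_words & set(text.lower().split()))

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS.items(), key=lambda kv: overlap(q, kv[1]), reverse=True)
    return [(url, text) for url, text in ranked[:k] if overlap(q, text) > 0]

def aisr_answer(query: str) -> str:
    """One authoritative-sounding response, footnoted with its sources."""
    hits = retrieve(query)
    if not hits:
        return "I couldn't find an answer."
    answer = " ".join(text for _, text in hits)  # stand-in for LLM synthesis
    notes = "".join(f"\n[{i}] {url}" for i, (url, _) in enumerate(hits, 1))
    return answer + notes

print(aisr_answer("can my snowblower clear wet snow"))
```

The user sees a single answer with optional footnotes; whether anyone follows those footnotes is exactly the question the talk goes on to raise.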
[12:54] And I think the typical answer is, well, of course we can, dude, just check it out. And I think, to many, you know, these lovely gentlemen do have a point. They do have a point. And of course Bard agrees. I asked Bard about this. I was like, you know, can I get you to give me true answers? And it's, dude,
[13:19] Dude, man, it's my primary goal, man. Its primary goal. Et cetera. So end of talk, right? That settles that.
[13:48] All right, maybe not. Maybe not. Because epistemic tools are effective insofar as we can use them reliably to help us have true beliefs. But we might think that, and I'll get to this, I'm not going to explain it yet, the social integration of AISR itself might end up undermining our ability to use it reliably. That is, it could be that actually
[14:18] embedding this type of AI in our everyday lives might itself cause problems that will prevent us from using it reliably, even if it were, for example, reliable in the ideal conditions. And I think the answer to that question is yes. And I think there are at least four reasons for thinking that we face some problems here.
[14:45] These four reasons are all going to be familiar to people in this room. They are four reasons that I like to label with the happy name of the Four Horsemen of the Epistemic Apocalypse. Okay, so let's just go through the Four Horsemen of the Epistemic Apocalypse. Number one. Well, one problem is AISR and
[15:10] AI in general, of course, but AISR. Again, these are not meant to be new. These are not original to me. These are just issues that I want to flag. There's the problem of weaponization. That is, that state actors, political campaigns, and various other actors might seek to weaponize AI and AISR to feed people propaganda, misinformation, or even incite violence.
[15:39] This can happen for all sorts of reasons. We can obviously use AISR to help us generate propaganda, no doubt happening every second. We can try to game the information that chatbots consult and on which they're trained. That is, try to game various sources that you think the
[16:02] the chatbots might consult when asked about certain things, and then, of course, deliberately construct weaponized AISR platforms. That is, deliberately construct a platform that's built to feed propaganda. Just as an anecdote on this: when, as some of you may know, Elon proposed his maximum truth-seeking AI to Tucker Carlson, Tucker's immediate response was, oh, you mean Republican AI. And he really said it. That is a joke,
[16:33] right? It was a joke, right? Right. Okay, well, anyway, I mean, whatever your leanings, the point is that it does seem possible that you could do that. Maybe, maybe not, I don't know. Another possible worrying threat is what I call polarizing AI. We might think polarizing AI could be a weaponized AI that is actually
[16:58] constructed to push people to the extremes on certain issues. That is, divide people in a certain way, in the way that, for example, Mark and I have been talking about in breaks, the way Russia's Internet Research Agency has been so effective at doing. You can imagine working with certain platforms to try to get that done.
[17:25] This could, you know, in a sense, you might say, well, this is just a sort of byproduct of weaponization. But there's another way that this could happen. And that's, I think, possibly, I don't know, but seems to me that is something worth taking seriously, that is that this could happen organically.
[17:41] We already know that everything we access on the internet, almost, is personalized. All the news that comes down our Facebook feed, all the ads that we face when we're reading the New York Times or Fox News or what have you, all these are personalized to fit our personal preferences and our past history online and off. And that's fantastic when you're trying to figure out what to watch tonight.
[18:10] Right. Or what books to buy. It's awesome. It's not so fantastic, as we all know, when you're hunting for facts, because when you're only getting the facts in your searches and on your social media feed that fit your preexisting preferences, that's not a recipe for bursting your bubble. It's a recipe for hardening it. So that, we already know, is happening on the internet, right? Now imagine our responses from the chatbots
[18:39] become personalized to that degree. Super helpful. Maybe, right? So something to think about, no doubt. And again, on all these, there's people in the room, Scott, who knows a lot more about this stuff than me. It's just to make sure that Scott's paying attention. I'm going to mention his name every five minutes. Okay, because God knows the content's not going to do it. All right.
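A minimal sketch of that hardening dynamic, with invented topics, weights, and update rule: a barely detectable initial preference, fed back through engagement-weighted ranking, compounds until the feed is dominated by one topic.

```python
# Toy feedback loop behind personalized ranking: whatever gets shown and
# "engaged with" is upweighted, so small initial preferences compound.
# Topics, weights, and the update rule are all invented for illustration.
import random

random.seed(0)
weights = {"topic A": 1.05, "topic B": 1.00}  # a barely detectable lean

for _ in range(50):
    topics = list(weights)
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    weights[shown] *= 1.1                      # engagement feeds the ranker

total = sum(weights.values())
print({t: f"{w / total:.0%}" for t, w in weights.items()})
```

After a few dozen rounds the feed is lopsided, not because the ranker is malicious, but because the loop rewards whatever it already showed you.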
[19:08] Such a jerk. I am a terrible person. So the third problem is another familiar problem, the problem of the self-poisoning well, which is that we already know that a ton of stuff on the internets is generated by the AIs. We don't know how much is actually generated by the AIs. There's a paper that was
[19:32] going around, as I'm giving this talk, saying it's like 50%. I don't know. And of course, what do we mean by AIs? If we mean just algorithms, then, like, all of it. So in any event, there's a lot of stuff, an increasing amount of stuff, on the web already generated by AI. Some of that is infected by polarization, weaponization, and, as mentioned in the previous talk, hallucination. That is, the propensity for AI to sometimes make things up, like human beings.
[20:01] And that may mean, in other words, that AI is poisoning the well from which AI drinks. It scrapes the internet, it gives back to the internet, scrapes the internet, gives back to the internet. And it could be that things are degrading. Don't know whether that's the case, not saying it's inevitable, but it's something to think about. The one that I'm actually really interested is the problem of trust collapse. This is my own term for a phenomenon that a lot of us have been worried about.
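That loop is easy to caricature in a few lines: fit a distribution to the current "web," publish samples from the fit, scrape those samples as the next round's training data, repeat. A Gaussian stands in, very crudely, for the model; the only point is that each generation trains on the previous generation's output, so estimation errors compound.

```python
# Crude caricature of the self-poisoning well: each "generation" is fit to
# the current web, but the web increasingly consists of the previous
# generation's own output. Watch the estimates drift as errors accumulate.
import random
import statistics

random.seed(1)
web = [random.gauss(0.0, 1.0) for _ in range(200)]  # original human-made data

for generation in range(1, 6):
    mu = statistics.fmean(web)                       # "train" on the current web
    sigma = statistics.stdev(web)
    web = [random.gauss(mu, sigma) for _ in range(200)]  # model output replaces it
    print(f"generation {generation}: mean {mu:+.2f}, std {sigma:.2f}")
```

Whether real web-scale training degrades this way is, as the talk says, an open question; the sketch only shows why the feedback structure makes degradation possible.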
[20:30] And that is to the extent that the problems like one to three occur, it seems that we could suffer trust collapse in the sense that people will start to, if they become partly aware of these problems or even hear about these problems, or even just experience things in light of these problems without realizing that the problems are going on, they could get to a point where they start to trust less the information they're receiving.
[21:01] that they get to a point where they are no longer sure what to believe. So if that's the case, then it may be that consulting experts is going to be more problematic. People will become less trusting of them, but they might also become less trusting of the AISR. That is, they might just suffer trust collapse. The other thing to keep in mind is that this process of trust collapse can be weaponized itself,
[21:31] and is being weaponized. The very idea that some people are trusting less can in fact be used by people to get them further confused. For example, by claiming that some footage that was actually taken of you was generated by AI as a candidate for public office in this country recently did.
[22:01] Right. This particular person actually did do something, was filmed doing that thing, making those public remarks, and then later claimed that it could be AI. Okay. It could have been, I guess. I don't know. And that's the point. I don't know. People start to worry about what to trust. That could be a problem.
[22:26] Now, I talked about AI and social integration. I gave you these examples first before telling you what I meant, partly just to provoke your imagination. What I'm trying to say here is that plausibly the threats raised by the Four Horsemen of the Epistemic Apocalypse get worse to the extent to which the AISR is widely adopted, and not just by individuals, but by governmental institutions, educational institutions, and corporate institutions.
[22:55] And to the extent to which AISR is normatively embedded. And by that I mean, to the extent to which it's sanctioned by and encouraged, its use is sanctioned and encouraged by those institutions. At least that's my hypothesis.
[23:14] So can we mitigate these risks? Well, yes, I really think we can. I hope so. And I know that lots of people in this room are worried about that. And some of you, Scott, are actually trying to mitigate them, work on it. Hear that sound?
[23:37] That's the sweet sound of success with Shopify. Shopify is the all-encompassing commerce platform that's with you from the first flicker of an idea to the moment you realize you're running a global enterprise. Whether it's handcrafted jewelry or high-tech gadgets, Shopify supports you at every point of sale, both online and in person. They streamline the process with the internet's best converting checkout, making it 36% more effective than other leading platforms.
[24:03] There's also something called Shopify Magic, your AI-powered assistant that's like an all-star team member working tirelessly behind the scenes. What I find fascinating about Shopify is how it scales with your ambition. No matter how big you want to grow, Shopify gives you everything you need to take control and take your business to the next level. Join the ranks of businesses in 175 countries that have made Shopify the backbone
[24:29] of their commerce. Shopify, by the way, powers 10% of all e-commerce in the United States, including huge names like Allbirds, Rothy's, and Brooklinen. If you ever need help, their award-winning support is like having a mentor that's just a click away. Now, are you ready to start your own success story? Sign up for a $1 per month trial period at shopify.com slash theories, all lowercase.
[24:55] Go to shopify.com slash theories now to grow your business, no matter what stage you're in: shopify.com slash theories. But it's still going to be the case that, even if we can get AI more reliable in principle, or more reliable with regard to certain sorts of questions, the Four Horsemen are still going to threaten our use of it in all sorts of contexts, particularly in contexts which have social and political resonance.
[25:25] And as we all know, that's a tiny little context, right? Because nothing's politicized in this context, right? Nothing. It's not like the coffee we drink, the cars we drive, the clothes we wear, the things we say, none of that can ever be politicized. The political and social realm, it's a tiny little realm, very self-contained with neat borders. So I think that's an issue with our collective epistemic agency. And now I want to turn to the other problem that I mentioned.
[25:55] So the first problem was the problem of, well, could there be factors like the Four Horsemen of the Epistemic Apocalypse that could undermine our ability to use AI reliably? But there's another way we can evaluate our use of tools, another way we can evaluate our use of tools, and that is in terms of our ability to use those tools reflectively. So now I'll give you the definition first and then give you some examples.
[26:19] So I'm going to claim that you use a tool reflectively to the extent that, you know, you confidently use it to generate the right results, but you also understand its limits, you understand to some extent how it works, and you care about using it effectively. You actually have a sort of attitude of giving a crap. Okay. Now notice I define this as a matter of degree, right? Everything I've done so far has been a matter of degree. That's a choice. I'm not saying
[26:46] you use it reflectively like a button that goes on and off. It's an extent, the extent to which you use it. So I'll give you some examples. Almost everybody here, I hope, can use a screwdriver pretty reflectively, right? You know how to turn it, right? You know, righty-tighty, right? You know that it's not particularly effective as a saw, right?
[27:12] You can defend why the Phillips head is the better one to use in this situation, and you care, at least when you're using it, about getting the job done. Okay, so we can do that. On the other hand, there can be situations in which we can be trained to use a tool and be trained to use it mechanically. So, for example, someone like myself might be trained to use a metal detector, right? Just trained to use a metal detector.
[27:41] A handheld one, without knowing particularly how it works, without knowing its limits, like how much metal it can detect and where, and without really caring whether it's effective, because it's not my job to care about that. I'm just doing the thing they told me to do. When that happens, I'm going to say that we're using a tool not reflectively but, to a greater degree, mechanically, which is to say that,
[28:10] rather than the tool becoming an extension of us, when we use tools less reflectively, we become extensions of them. Okay. I think pretty obviously agency, at least in my opinion, also increases to the extent that we use our tools not only reliably but reflectively. The person who is able to reflectively use a tool
[28:39] can get more stuff done, intuitively and more effectively, than the person who's just, you know, waving the metal detector and not giving a crap. Agency decreases, we might think, to the extent to which we aren't able to use it reflectively. And again, these are matters of degree. So can we use AISR reflectively? Well, sure we can. Of course we can. But there are some barriers to us being able to do so
[29:07] in a way that, when it becomes socially integrated, that is, barriers to the general reflective use of it. One reason is that we use it reflectively to the extent to which we use it competently to help us generate true beliefs, understand its limits, and can defend it as reliable. But of course we can't defend it as reliable if we can't, in a particular context, use it reliably.
[29:34] Nor can we defend it as reliable in a particular context if we're not sure whether that's a context in which we, not Dave Chalmers, who could use it reliably in any context, but we, right, can use it reliably. And the Four Horsemen of the Epistemic Apocalypse have already shown us that we have worries about being able to use it reliably. So if we can't use it reliably, then we're not going to be able to defend our use of it as reliable. So therefore, we're not going to be particularly reflective, in this sense, in using it.
[30:03] Second problem. Opacity. This is pretty obvious.
[30:09] There's a sort of explainability problem with any black-box AI, but certainly with LLMs. It's difficult to know why exactly, I mean, we can know why they work in the big-picture sense, but why they generate those particular results can be awfully hard for us to understand, particularly if you're just a straightforward civilian like myself, right? Who, unlike people like Scott and Dave and many other people in this room, you know, can barely do addition.
[30:36] Secondly, here's a paper from Shah and Bender, and I disagree with some of their stuff, but this is an interesting remark from 2022. They say,
[31:00] AISR synthesizes results from different sources, and it masks the range that is available rather than providing a range of sources that the user can, as it were, play around in. So intuitively they seem to be suggesting that by just having a single authoritative expert response, you get the same problem you get when you just consult
[31:25] an expert. You might get the right results, but of course your own epistemic agency is in a sense being handed over, right, to the expert. That's not necessarily a bad thing in lots of contexts. But whether that's to be evaluated as a good or bad thing is exactly what I'm not doing right now. I'll get to that in a minute. I'm just asking, in what ways might it hinder epistemic agency? And this seems to hinder it in the reflective sense. And then there's this
[31:54] other issue about social integration. And this is a point that I'm going to make which, you know, I'm not as sure of. I'm not certain of any of this stuff; I'm saying that's the moment in which we live. But I think, again, that the more we widely adopt and socially embed, that is, normatively embed, these tools, the less reflective we might become with them.
[32:22] And I'll give you a brief thought experiment to back that up. Call this the Delphi problem. So imagine a society that consults an oracle with regard to what to believe. So whenever they have a problem about what to believe, they consult the oracle. Now imagine the society does this over a period of generations.
[32:52] They ritualize the performance. They normatively embed it. That is, their institutions encourage people to consult the Oracle whenever they're figuring out the answer to a question. Like in school, consult the Oracle. Learning math, consult the Oracle. Writing stories, consult the Oracle. So it becomes ritualized. It becomes normatively embedded. It becomes habitualized. It's just what we do. After a while, people might even forget about why they did it.
[33:23] They're getting mostly good answers. Sometimes they don't know that they're getting good answers. Sometimes they do. Depends on the question.
[33:54] Because some questions, it's hard to verify what the Oracle says. But imagine they do. Well, to some extent, you might say, yay, then their epistemic agency is increasing, right? They've got a reliable tool. But in another sense, we might think they're missing something. They're missing that reflectiveness. They may even be missing the motivation to care about the reliability after a certain point, or at least with regard to some questions. That's the sort of worry that I have.
[34:25] Well, the implication, again, I said this is implicit in the talk, I'll bring it to the surface for a moment. I think epistemic agency is an important part of democratic practice. I think that when we engage in democratic practice, we ideally treat the political space, to borrow a phrase,
[34:55] and really hijack a phrase, from the philosopher Wilfrid Sellars, as a space of reasons, or we should. That is, when we engage in democratic practice, truly democratic practice, we try to engage in it as people who are trying to do the right thing together with other people and are trying to figure out what to believe together with other people.
[35:25] Democratic practice, if understood as a space of reasons, just is a space where we treat each other, or should, as equal moral agents and as equal epistemic agents in one basic sense. Not that we treat each other as equally good, I certainly don't think that of other people in my space all the time, and they don't think it of me. Nor am I saying that democratic practice requires us to treat each other as epistemically equal, that is, to
[35:54] treat each other as if we all know the same things, because we obviously don't. What it does require of us is to treat each other, in a sense, as being equally capable of doing something: equally capable of making our own minds up about something. To the extent to which a political environment starts to treat people within that environment as not capable of making up their own minds,
[36:22] and so therefore maybe making up their minds for them, to that extent that environment becomes less democratic. So I think the punch line here is that epistemic agency is important for democracy. So when we worry about epistemic agency, we are to some extent, or should be, worried about democracy. Okay, there's a whole book in there,
[36:50] coming out next year, but never mind about that anymore. That's a sketch. Here are a couple of objections. I mean, one objection you're no doubt going to raise, against me anyway, is, well, gee, right, Lynch. Okay, fine. But can't we still use AI and AISR reliably and perhaps reflectively in some domains? That is, on some questions? And can't some people do it? And I want to say yes.
[37:18] As I've already said, yes, we can. Hallelujah. Isn't that awesome? Great. I use it. I hope sometimes I'm using it reliably and reflectively, although, again, I'm not so sure, you know, who knows. The question I was asking, though, was not that question. The question I'm concerned about is the question of what happens when these tools become normatively embedded and widely adopted.
[37:47] That's the question that I was worried about. Another thing I might add here, this isn't on the slide, but just in light of previous discussions: you'll notice here that I have not said one word until now about whether I think AI LLMs
[38:09] have beliefs themselves. I haven't talked about them being epistemic agents themselves. They could be, you know, that's another question. It's a whole other question. Are they epistemic agents? And are they reflective ones? Are they reliable ones? That's a different question. In this talk I've been, and will continue to be, just agnostic about that. I don't know. Don't know the answer to that question.
[38:37] I'm interested in the answer, I just don't know the answer. What I am interested in here, though, is how our use of these things as tools, if that's what they are, and not agents themselves, our use of LLMs as tools in a particular way, how that affects our agency. That's an immediate problem. Another objection you might raise is what I call the same old, same old.
[39:03] I mean, after all, you might point out, and correctly, that the Four Horsemen of the Epistemic Apocalypse, we've seen them come thundering into view with other technologies. It's not like there aren't other technologies that raise the problem of weaponization, polarization, yada yada yada, trust collapse, right? I mean, yeah, right? Writing. So sometimes people will say to me, you're just acting like Socrates,
[39:29] back when Socrates snarled at the possibility of writing, which he did. Maybe it was Plato. Actually, you know, Plato, Socrates, hard to tell apart on these things. But the point is that, yeah, I sort of am being grumpy like that. That's what I'm doing. Yes. But the fact that we've seen a problem before does not mean it's not a problem. Okay.
[39:53] So the fact that, yes, these problems have emerged before with other epistemic and informational technologies. Okay, but they might be emerging again and we should pay attention to them. What we need to ask is not only what can this tool get us, but what is it going to do to us?
[40:21] So I want to end on that note, echoing something that Scott said a couple hours ago. I think the human epistemic condition is inherently fragile. As it turns out, I think we're actually not particularly effective, not particularly reliable, not particularly reflective epistemic agents ourselves.
[40:49] As Kant said, we're constructed from very crooked timber. And it seems like that's a relevant thing to keep in mind when we consider widely adopting and normatively embedding these sorts of technologies. Because, actually, I think, because we as individuals are such ineffective epistemic agents much of the time, particularly with regard to things that are of social and political relevance,
[41:19] because we're clouded with bias and so forth, we need to promote and protect those institutions and practices that encourage reflective truth-seeking and epistemic agency. That encourage epistemic agency. I mean, I think, or I worry, I might say, that the more we incorporate
[44:29] Thank you, Michael, for your talk. Questions? Thank you for your talk. I have a question, right? So you talk about the epistemic condition and that implies to me at least some ethical component. For me, the whole process of discovering or finding knowledge and choosing to believe something is a choice. There's a lot of activity that goes into it. You have to weigh options and
[44:54] and come to certain conclusions. At what point do you think that we'll be able to program artificial intelligence with the capacity to make the kind of decisions that we make in our day-to-day lives? And do you think that it will take that much more development for them to make, for artificial intelligence to make more effective decisions than perhaps we can make because it's able to compute more factors at the same time?
[45:22] I'm not an expert on the technology of AI, as everyone who is in this room can tell. But from the people that I talk to, and certainly listening to Scott earlier today, my own sense is that we can already use AI to help us make decisions that we make every day. And in some cases we have, due to the work of AI safety experts, installed
[45:51] certain guardrails to make it more difficult for us to ask certain sorts of questions. But, I mean, it certainly seems to be possible. I mean, again, I'm not an AI expert. It certainly seems worth thinking about, let's put it that way, you know, when we're going to start using AI as therapists. I know people are working on that already, right. And, you know, another
[46:18] example that I think about is this one. I don't know about you guys, but you might've noticed that human beings in general, not anybody here in this room, but you've heard that human beings often make bad parenting decisions, right? You've heard that, right? There are some people out there that sometimes make bad parenting decisions. Now imagine a use of AI as, you know, a parenting consultant.
[46:49] And now imagine, because I'm a philosopher and I'm not held prisoner by the facts, imagine that a society starts to think, well, actually, AI isn't perfect at parenting decisions, but it's a lot better than the average person, so why don't we just have our kids raised by little AIs, maybe put them in things like this?
[47:12] That way we can all kick back. And I mean, if you are a parent, particularly of, let's say, a three-year-old, who hasn't perhaps wished for an AI parent to come along and entertain your kid, right? I don't think that would be a society I'd want to live in. I know I'm riffing off your question. Sorry. I apologize. I hope that helped. It's okay. Thank you.
[47:35] Yeah, thank you. Great talk. So I would like to go back to the same old, same old objection, if actually your point is the same as in the case of other tools. Because, it seems, I don't know why, I don't use my brain anymore to make calculations, just very, very little ones. Probably when I get to 80 years old, I'll be completely
[48:03] bad at mathematics. I don't feel as if I'm an extension of my calculator, although probably I will become one. So I don't know, I think you probably want to raise a point that is different from the other technologies. So what is the specific difference that puts us at risk of becoming extensions of these tools, in a way that I don't see with other types of technologies?
[48:32] That's a great point. I agree with you, though I'm a little farther along than you. I just use calculators for everything. Well, maybe not one plus one, but once it gets past that, it's too hard for me. Like you, that doesn't make me feel like less of an epistemic agent. It doesn't make me feel more like an extension of the calculator.
[48:59] But we may disagree. I think, in the sense I was trying to explain, and no doubt in a metaphorical, not particularly precise sense, I think to some extent I am. You know, with regard to calculators, I'm a lot more like the person I imagined who just uses the metal detector mechanically. Now, the difference between me and the person I was imagining is that I actually care about getting the right answer.
[49:27] When I'm calculating the tip, I want the right answer from my calculator. So to that extent, I am using it reflectively, right? Remember, reflectiveness comes in degrees and it has various components, and you could be good at one component and not another, right? Like the calculator: I don't know how it works. By magic, I think. So the idea, the metaphor, of becoming an extension of a tool rather than it becoming an extension of us is a metaphor, but it's also meant to be something that comes in degrees.
[49:57] Now, I don't deny that there are also going to be differences, and I thank you for asking this, between the calculator and AISR. Obviously there are. One has to do with scale. One has to do with the nature of the technology itself, the most obvious being that it can produce results that are the sorts of results that,
[50:20] and this is echoing something Scott said, which I often say myself, I would judge to be the results of a human, right, were I in a different context, not sitting at my computer knowingly talking to ChatGPT-4. That, I think, does make the tool different.
[50:42] It makes the tool different for all sorts of reasons. It raises questions that are similar to my consulting, and this is by design, my consulting an expert. This is why I raised the question of the oracle. If we think about these things as oracles, which they're not, but if you think about these things like the society was thinking about the oracle, there you might have said, imagine a society that has a bunch of experts, an expert panel.
[51:12] Right. On an everyday decision of what to believe, it consults the expert panel. In all sorts of ways that's a good thing, depending on what it is that we're consulting about, right? If it's a medical issue, if it's an issue about the climate, I think consulting experts is the right way to go. What I'm suggesting is that even in the consultation of experts, we've got to be aware that we're taking our epistemic agency and handing it off to somebody else.
[51:41] Sometimes that's a good thing. In the calculator case, with me, it's a good thing, but it's not necessarily always a good thing. And even if we thought it was always a good thing, the Four Horsemen of the Epistemic Apocalypse, I suggest, show that we're going to have problems doing that in a reliable way. A great question. I'm sorry I can't do better than that. Yeah. Thank you, Michael, for the talk. I actually have two questions and I've been oscillating between which one to ask.
[52:07] So I'm going to stick with this one, actually. The easy one, the one that's easy for me to answer. Actually, it's actually a clarification about how you're kind of just defining an epistemic tool in this situation at the very early slide. And I was just wondering, because it looked like in there it had, at the very end of that definition, epistemic tool had something to do with producing beliefs or knowledge. Yeah. And
[52:33] I hope this isn't pedantic. In some ways, I wonder if it's not important to make a distinction between epistemic tools, right, which are used by what we typically associate as humans with epistemic agency, and which produce some kind of epistemic output, right? Beliefs, knowledge, it might be. And then epistemic producers, right? And it seems like, to a large extent, right,
[52:54] a lot of people see large language models as kind of epistemic producers. It's telling me something that we would typically associate as belief or knowledge expressed by a human. It's unfortunate that it's so good at language, because that's how we express epistemic statements, or propositions that we can evaluate for truth. But I'm just wondering if, in some ways, the definition there makes it look like epistemic tools are part of the process of generating belief or knowledge, which is true, but it also makes it sound like they're generating it. And it seems like there's actually a distinction there.
[53:24] And what we're looking for when we're creating, say, AI or AGI, we're looking for those things which are epistemic producers in their own right, that have that epistemic agency. And it feels like in that situation, that's the production, right? That's producing knowledge. And in that way it could be kind of pedantic. Maybe you're just like, actually, I meant to say the second thing. No, this is helpful. I think this is helpful. I was not claiming that the chatbots are epistemic agents. They may be.
[53:54] Right. I see your point. Maybe this will help, and I like your way of putting it. When we're using an epistemic tool, we're engaged in a process, the process of using the tool, together with whatever our own cognition, if any, contributes in relation to that, right? Which may not be much, right? Like in my case with the calculator, right: no cognition, empty blank slate, right.
[54:22] The process that we're engaging in, I'm claiming, is one where the goal of the process is to generate an epistemic output. Okay. I'm remaining neutral on whether the AI itself has epistemic outputs in the sense in which I'm using that term. That is, as you correctly noted, beliefs.
[54:49] So for it to be an epistemic agent on my account, it would have to be capable of deciding what to believe based on reasons, and that would require it to have beliefs, among other things. Do they have beliefs? I don't know. I don't know. Like, I literally don't know.
[55:08] Sometimes I think they might. I mean, it depends on what you mean by belief, right? If you're Dan Dennett, I think this is a time for the intentional stance, right? Dan Dennett's intentional stance, if you took that, it's an instrumentalist position, the intentional stance, where, you know, things have beliefs insofar as you take a certain stance towards them. Well,
[55:33] that's starting to look to me like a plausible stance to take up with regard to AI in some contexts. But I don't know enough yet to feel whether that's warranted or not. So I remain neutral. Internalism versus externalism about content in philosophy of mind could be an interesting distinction as well when it comes to belief. Yeah, absolutely. I mean,
[56:00] Questions of what content is? Right now, all I can say is what we've already said, which is, do the generated states have content? Well, in the following sense, their answers, that is the strings of text, are such that we take them to express propositions in the language in which we are interpreting them.
[56:28] I don't know the answer to any of those questions, or any other probably. These last couple of questions actually
[56:58] covered some of the things I wanted to ask, which is pretty cool. But the thing that I'm pretty concerned about, philosophically especially, is this kind of dependence that we're having on all of these tools that we make in the sense that what used to be an extension of us, they're almost starting to use us now as tools in a sense. I was talking earlier about how plants and all these different things are essentially using us to propagate. So I wonder, in terms of how we're trying to replicate a lot of human cognitive capabilities with AI,
[57:28] and computation. What's the minimum amount of tools, regardless of whether it's technological, or whatever words you want to ascribe to it? Why are we not more focused on figuring out the most independent way to increase our own abilities? There are people out there that have extraordinary creative, artistic abilities. There are savants, you've probably heard of them. They have immense ability to calculate.
[57:56] You know, that would give a lot of people a run for their money compared with what they can quickly, you know, put in their calculator. So I'm just kind of interested in why we haven't started to look more into that, in terms of changing our output, as opposed to just having machines do it. Hopefully I said that well. Yeah, I don't know. I mean, a couple of things I would say. You know, it does seem to me that a lot of the people who have been interested in
[58:24] socially integrating AI, the sort of AI we're talking about, are in good faith actually interested in helping us become better epistemic agents. I mean, right? I think you would agree. Neither of us is impugning everyone. I mean, some people are going to have bad intentions, some people are going to have the intention only to make money, but other people are in good faith trying to help us become better epistemic agents. And to some extent, I think they're being wildly successful.
[58:52] So with that qualification, I think your note is right: another way to approach these sorts of problems is to try to figure out how to make human beings more productive on their own, how to become more creative people, how to scale up creativity. That would be a really cool thing, if we could do that. We haven't figured out yet how to do it. Sorry. But.
[59:22] Hi. For the last three months, I've been a substitute teacher at a middle school. This has been quite an experience for me. Thank you, sir, for your service. I appreciate that. Seriously. There you go. All right. And what I have learned is that the students there do not know how to do anything. They know how to get an answer.
[59:50] They do not know how to develop that answer. And that is definitely coming from their ability to search and to find answers in other ways. And I just wondered how that fits into here about this knowledge base, the knowledge versus the answer versus how to get to an answer. The most dramatic one was to me was when I was conducting band, which is something I really love doing. And I got to a certain point and the student says, well, the teacher hasn't told us how to do it. And it was just the same notes
[60:20] that they had been playing before, almost. So it's the same thing: how do you do it, versus what is it? Right. I think this is something that we've all been worried about with education since the idea of widespread education became socially integrated, which is how to do it at scale in a way that actually
[60:48] nourishes the creative part of the human being, right, the part that wants to figure things out. That wants, to echo a comment I made earlier today, to push the boulder up the hill themselves, right? That isn't just worried about the boulder being at the top of the hill. You know, so, yeah, the thing that you're worried about is the thing that I'm worried about with my
[61:17] university students. The extent to which, we might say, going back to the same old, same old: we've been worried about this, as I said at the top of my answer to you, since the beginning of education. The worry that a lot of us now have, I think, is that this particular tool is so effective, it's so good, that the sorts of questions that we've had with other tools, including just Google search,
[61:45] with writing, with books, with calculators, these sorts of questions we had before, but the scale is different. Let me put it this way: differences of scale that are big enough become differences in kind, which is maybe what I should have said to Claudia. What's the real difference? Well, the difference is one of great scale, which eventually becomes a difference in kind.
[62:09] I mean, the difference between the horse and buggy and the car. I mean, somebody might say, well, what are you getting all worried? It's not that different. It just goes faster. Well, that would be to really underestimate the difference between those technologies. So I think that you're right to be worried about that. And I think we as a society need to start, as Scott was telling us earlier today, we really need to start taking some of these questions very, very seriously right now.
[62:39] Hi. That was, first of all, a really interesting talk. Thank you.
[63:08] I guess, so when you talk about the threats of AI, and we talk about epistemic agency and democratic politics, I'm interested in your view on how that factors in with censorship and restrictions on users using these tools, given the policies that a lot of companies have taken with that. I guess, do you think there should be less or more restrictions, or maybe it's okay how it is now, or maybe it's not relevant at all? I don't know.
[63:39] Super relevant question. I have been thinking about it. I feel at this point, and I'm sorry to keep saying this, but I think this is the situation a lot of us here today at this conference have been in with regard to AI: things are moving very quickly, and it's really hard to give particularly reflective answers when you're worried about a moving target.
[65:19] That said, clearly if we're going to institute these tools on a widespread basis, we need to get better. We need more prompt training, right? If we're going to use them, at least
[65:48] should be able to
[66:19] Not particularly, not a great bit of punting, you know. I've never used a machine gun to shoot things, but I guess some people do now. But, I mean, you don't see people saying, hey, let's pass a law that will, actually, you do see this, but you don't see many responsible people saying, hey, let's hand out tanks, right? You know, people are generally like, whoa, whoa, whoa, whoa.
[66:46] Maybe a tank for me, but not my neighbor, right? I hate that guy. So there are all sorts of things that we do, and in the informational sphere that's certainly the case. I mean, think about the terrible things that Congress was originally worried about, about child pornography, those sorts of things. I think there's a lot of agreement, right? You know, getting AI to help you build a bomb, right, is a scary thought. In fact, I'm even sorry for mentioning it; consider that a trigger warning, right?
[67:15] All these things. So I think we're doing the best that we can right now. And, you know, you can talk to Scott and other people who are AI safety experts to think about what the problems are and what else we should be doing. Clearly censorship could be an issue at some point, but I don't think that's really the worry that I have right now. Right. Hi.
[67:44] Thank you for the talk. I think it was great, especially when you mentioned the ability to use tools reflectively. I think it applies particularly to epistemic tools.
[67:56] But I'm just wondering if you have any practical suggestions for how we can actually get people to use tools reflectively, whether it's by policy, regulation, social norms, education, or any other realm you think of, especially in terms of ex ante, not ex post. So it's not just that, you know, you use the tools badly and then you get punished. But how do we encourage people to use them well? Yeah. Yeah. Great question. Yeah. I mean, I think
[68:25] This is the end of the session, thank God, so I don't have to actually give you a lot of great detail. And again, I want to remind you that I'm a philosopher, not a policy person, so I'm good at pointing out problems, not necessarily solving them. This is truth in advertising, people, right? I work in, I'm in a business school, so I'm looking for solutions. I know, and I'm glad that you are. I think, broadly speaking, though, we can give some general solutions that we really need to take more seriously.
[68:53] Right now in this country and a variety of countries around the world, there are certain institutions that are devoted to the reflective pursuit of knowledge that are under attack. And those are institutions like this one and other ones. And I think right now we need to do a better job of protecting and promoting the work of those institutions.
[69:19] I think these institutions, including my own, have not helped things themselves. I mean, we're often not very good at, sort of, marketing, as it were, our own contribution to society, right? Which I think goes beyond just getting people jobs, although that's an important part of it, to actually making them into more reflective democratic citizens. I mean, I believe that John Dewey was right: the goal of education is to make people better democratic citizens.
[69:51] I also think that, clearly, our ability to transmit information, what we call news, to people in a reliable way has become compromised, as we all know, in recent years. What we sometimes call the news media, the traditional news media, right, obviously has a disappearing, possibly doomed financial model
[70:19] for transmitting reliable information. If it is doomed, we need to quickly come up with another model. I have thoughts about that. But it may not be doomed if we can intervene at a societal level to try to promote and protect those institutions. Because I think those institutions, the two I just named, together with another institution, the legal system,
[70:47] really are the three pillars that stand between us and the end of democracy, something which, like many of you here, I'm a little worried about. Thank you.
[71:07] Firstly, thank you for watching, thank you for listening. There's now a website, curtjaimungal.org, and that has a mailing list. The reason being that large platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever they like.
That's just part of the terms of service. Now, a direct mailing list ensures that I have untrammeled communication with you. Plus, soon I'll be releasing a one-page PDF of my top 10 TOEs. It's not as Quentin Tarantino as it sounds. Secondly, if you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself
[71:48] Plus, it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube,
[72:06] Which in turn greatly aids the distribution on YouTube. Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and, as a community, build our own TOE. Links to both are in the description. Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio platforms. All you have to do is type in Theories of Everything and you'll find it. Personally, I gained from rewatching lectures and podcasts.
[72:35] I also read in the comments
and donating with whatever you like. There's also PayPal, there's also crypto, there's also just joining on YouTube. Again, keep in mind, it's support from the sponsors and you that allows me to work on TOE full time. You also get early access to ad-free episodes, whether it's audio or video. It's audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier.
Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much.
      "text": " So I'm going to stick with this one, actually. The easy one, the one that's easy for me to answer. Actually, it's actually a clarification about how you're kind of just defining an epistemic tool in this situation at the very early slide. And I was just wondering, because it looked like in there it had, at the very end of that definition, epistemic tool had something to do with producing beliefs or knowledge. Yeah. And"
    },
    {
      "end_time": 3173.234,
      "index": 118,
      "start_time": 3153.916,
      "text": " I hope this isn't pedantic. In some ways, I wonder if it's not important to make a distinction between epistemic tools, right, which are used by what we typically associate as being humans of epistemic agency, which produce some kind of epistemic output, right? Beliefs, knowledge of it might be, and then epistemic producers, right? And it seems like to a large extent, right?"
    },
    {
      "end_time": 3203.763,
      "index": 119,
      "start_time": 3174.036,
      "text": " A lot of people see large language models as kind of epistemic producers. It's telling me something that would typically associate as being belief or knowledge expressed by a human. It's unfortunate that it's so good at language because that's how we express epistemic statements or propositions that we can evaluate for truth. But I'm just wondering if it's in some ways the definition there makes it look like epistemic tools are part of the process of generating belief or knowledge, which is true, but it also makes it sound like they're generating it. But it seems like this is actually a distinction."
    },
    {
      "end_time": 3233.933,
      "index": 120,
      "start_time": 3204.036,
      "text": " And what we're looking for when we're creating, say, AI or AGI, we're looking for those things which are epistemic producers in their own right that have that epistemic agency. And it feels like in that situation, that's the production, right? Or that's producing producer knowledge. And that way it could be kind of pedantic. Maybe you're just like, actually, I meant to say the second thing. No, this is helpful. I think this is helpful. I was not claiming, uh, that the chatbots are, are epistemic agents. They may be."
    },
    {
      "end_time": 3261.22,
      "index": 121,
      "start_time": 3234.275,
      "text": " Right. I see your point that when we're using an epistemic tool, maybe this will help. I think it's I like your way of putting it. When we're using an epistemic tool, we're engaged in a process, the process of using the tool, and also our own whatever our own cognition, if any, in relation to that, right, which may not be much right, like in the like, in my case, with the calculator, right, no cognition, empty blank slate, right. It"
    },
    {
      "end_time": 3288.626,
      "index": 122,
      "start_time": 3262.722,
      "text": " The process that we're engaging in, I'm claiming is one in which were the goal of the process is to generate an epistemic output. Okay. Um, I'm remaining neutral on whether the AI itself is it itself has epistemic outputs in the sense which I'm using that term. That is, as you correctly noted, beliefs."
    },
    {
      "end_time": 3307.858,
      "index": 123,
      "start_time": 3289.667,
      "text": " So for it to have epistemic being an epistemic agent on my account, it would have to be capable of deciding what to believe based on reasons that would require it to have beliefs and other things. Do they have beliefs? I don't know. I don't know. Like I literally don't know."
    },
    {
      "end_time": 3332.261,
      "index": 124,
      "start_time": 3308.353,
      "text": " Sometimes I think they might. I mean, it depends on what you mean by belief, right? If you're Dan, I think this is a time in which the instrumental stance, right? Dan Dennett's instrumental stance, if you took that, I mean, the intentional stance, excuse me, it's an instrumentalist position and the intentional stance, the intentional stance where you're, you know, things have beliefs in so far as you take a certain stance towards them. Well,"
    },
    {
      "end_time": 3360.401,
      "index": 125,
      "start_time": 3333.234,
      "text": " that's starting to look to me like a plausible stance to take up with regard to AI in some contexts. But I don't know enough yet to feel like that is whether that's warranted or not. So I remain neutral. Eternalism versus externalism about content in philosophy of mind could be an interesting distinction as well when it comes to belief. Yeah, absolutely. I mean,"
    },
    {
      "end_time": 3387.585,
      "index": 126,
      "start_time": 3360.759,
      "text": " Questions of what content is? Right now, all I can say is what we've already said, which is, do the generated states have content? Well, in the following sense, their answers, that is the strings of text, are such that we take them to express propositions in the language in which we are interpreting them."
    },
    {
      "end_time": 3417.944,
      "index": 127,
      "start_time": 3388.285,
      "text": " I don't know the answer to any of those questions, or any other probably. These last couple of questions actually"
    },
    {
      "end_time": 3447.073,
      "index": 128,
      "start_time": 3418.166,
      "text": " covered some of the things I wanted to ask, which is pretty cool. But the thing that I'm pretty concerned about, philosophically especially, is this kind of dependence that we're having on all of these tools that we make in the sense that what used to be an extension of us, they're almost starting to use us now as tools in a sense. I was talking earlier about how plants and all these different things are essentially using us to propagate. So I wonder, in terms of how we're trying to replicate a lot of human cognitive capabilities with AI,"
    },
    {
      "end_time": 3475.947,
      "index": 129,
      "start_time": 3448.029,
      "text": " and computation. What's the minimum amount of tools, regardless if it's technological, not whatever words you want to ascribe to it? Why are we not more focused on figuring out the most independent way to increase our own abilities? There are people out there that have extraordinary creative artistic abilities. There's savants, you've probably heard of them. They have immense ability to calculate."
    },
    {
      "end_time": 3504.155,
      "index": 130,
      "start_time": 3476.271,
      "text": " You know, that would give a lot of people a run for their money of what they can quickly, you know, put in their calculator. So I'm just kind of interested in why haven't we started to look more into that in terms of, of changing our output as opposed to just having machines do it. Hopefully I said that well. Yeah, I don't, I don't know. I mean, I, I mean, a couple of things I would say is, you know, it does seem to me that a lot of the people who have been interested in"
    },
    {
      "end_time": 3531.817,
      "index": 131,
      "start_time": 3504.599,
      "text": " Socially integrating AI, the sort of AI we're talking about, are in good faith actually interested in helping us become better epistemic agents. I mean, right? I mean, I think you would agree. I mean, like, I'm not impute. Neither of us are impugning. I mean, there's some people are going to have bad intentions. Some people are going to have the intention only to make money, but other people are in good faith trying to help us become better epistemic agents. And to some extent, I think they're being wildly successful."
    },
    {
      "end_time": 3562.244,
      "index": 132,
      "start_time": 3532.295,
      "text": " I think you're, you're with that qualification. I think your note about, well, another way to think about this is about approach. These sorts of problems is to try to figure out how to make human beings more productive on their own, how to become more creative people, how to, how to scale up creativity. That would be a really cool thing. Uh, if we could do that. Haven't figured out yet how to do it. Sorry. Uh, but."
    },
    {
      "end_time": 3590.077,
      "index": 133,
      "start_time": 3562.654,
      "text": " Hi. In the last three months, I've been a substitute teacher at middle school. This has been quite an experience for me. Thank you, sir, for your service. I appreciate that. Seriously. There you go. All right. And what I have learned is that the students there do not know how to do anything. They know how to get an answer"
    },
    {
      "end_time": 3619.633,
      "index": 134,
      "start_time": 3590.384,
      "text": " They do not know how to develop that answer. And that is definitely coming from their ability to search and to find answers in other ways. And I just wondered how that fits into here about this knowledge base, the knowledge versus the answer versus how to get to an answer. The most dramatic one was to me was when I was conducting band, which is something I really love doing. And I got to a certain point and the student says, well, the teacher hasn't told us how to do it. And it was just the same notes"
    },
    {
      "end_time": 3646.442,
      "index": 135,
      "start_time": 3620.247,
      "text": " that they had been playing before almost. So it's the same thing. How, how do you do it versus what is it? Right. I think this is something that we've all been worried about with education since we, we, we, the idea of widespread education became socially integrated, which is how to do it at scale in a way that actually"
    },
    {
      "end_time": 3676.971,
      "index": 136,
      "start_time": 3648.285,
      "text": " Nourishes the creative part of the human being right the part that wants to figure things out That wants to to echo a comment I made earlier today that wants To push the boulder up the hill themselves, right? That isn't just worried about the boulder being on but the top of the hill You know, so yeah, the thing that you're worried about is the thing that I'm worried about with my"
    },
    {
      "end_time": 3704.275,
      "index": 137,
      "start_time": 3677.056,
      "text": " University students that extent to which we might say going back to the same old same old we've been worried about this as I said at the top of my answer to you since the beginning of education the worry is now I think a lot of us have is that that this particular tool is so effective it's so good that sort of questions that we've had with other tools including like just Google search"
    },
    {
      "end_time": 3728.592,
      "index": 138,
      "start_time": 3705.623,
      "text": " Uh, with writing, with books, with calculators, these sorts of questions we had before the scale, independently of there's a difference in, you know, differences. Let me put it this way. Differences of scale that are big enough become differences in kind, which is maybe what I should have said to Claudia. Like what's the real difference? Well, the differences is one of great scale, which eventually becomes a difference in kind."
    },
    {
      "end_time": 3758.712,
      "index": 139,
      "start_time": 3729.36,
      "text": " I mean, the difference between the horse and buggy and the car. I mean, somebody might say, well, what are you getting all worried? It's not that different. It just goes faster. Well, that would be to really underestimate the difference between those technologies. So I think that you're right to be worried about that. And I think we as a society need to start, as Scott was telling us earlier today, we really need to start taking some of these questions very, very seriously right now."
    },
    {
      "end_time": 3788.012,
      "index": 140,
      "start_time": 3759.428,
      "text": " Hi, those are first of all, really interesting talk. Thank you."
    },
    {
      "end_time": 3818.609,
      "index": 141,
      "start_time": 3788.78,
      "text": " I guess, so when you talk about the threats of AI and we talk about epistemic agency and democratic politics, I guess I'm interested in how do you, what's your view on how does that factor in with NICOM's censorship and restrictions on users using these tools and giving the policies that a lot of companies have taken with that, I guess, do you think there should be less or more restrictions or maybe it's okay how it is now or maybe it's not relevant at all? I don't know."
    },
    {
      "end_time": 3840.196,
      "index": 142,
      "start_time": 3819.241,
      "text": " Super relevant question i have been thinking about it don't feel i feel at this point i'm sorry to keep saying this but i think this is a situation where a lot of us here today at this conference have been with regard to ai is that things are moving very quickly and it's really hard to give particularly reflective answers to you know when when you're worried about a moving target."
    },
    {
      "end_time": 3948.456,
      "index": 146,
      "start_time": 3919.002,
      "text": " That said, clearly if we're going to institute these tools on a widespread basis, we need to get better. We need more prompt training, right? If we're going to use them, at least"
    },
    {
      "end_time": 3978.712,
      "index": 147,
      "start_time": 3948.712,
      "text": " should be able to"
    },
    {
      "end_time": 4005.384,
      "index": 148,
      "start_time": 3979.07,
      "text": " Not particularly, not a great, you know, group of punting. I never used a machine gun to shoot things, but I guess some people do now. Um, but we do even, you know, I mean, you don't see people saying, Hey, let's pass a law that will actually do see this, but you don't see many responsible people saying, Hey, let's hand out tanks. Right. Um, uh, you know, people are generally like, Whoa, whoa, whoa, whoa."
    },
    {
      "end_time": 4035.538,
      "index": 149,
      "start_time": 4006.084,
      "text": " Maybe I would tank to me, but not my neighbor, right? I hate that guy. Um, so there are all sorts of things that we do and on the informational sphere, that's certainly the case. I mean, we think about the terrible things that Congress was originally worried about, about child pornography, those sorts of things. I think there's a lot of agreement, right? You know, getting AI to help you build a bomb, right? Is, is a scary thought. In fact, I'm even sorry for mentioning it as it's a trigger warning, right?"
    },
    {
      "end_time": 4063.507,
      "index": 150,
      "start_time": 4035.862,
      "text": " All these things. So I think we're doing as we're doing the best that we can right now. And I think, you know, you can talk to Scott and other people who are AI safety experts to think about what the problems are and what else we should be doing. But I'm not I don't think of this as like, you know, clearly censorship could be an issue at some point. But I don't think that's really the worry that I have right now. Right. Hi."
    },
    {
      "end_time": 4076.578,
      "index": 151,
      "start_time": 4064.087,
      "text": " Thank you for the talk. I think it's great, especially when you mentioned the ability of reflective use of tools. I think it applies particularly to epistemic tools."
    },
    {
      "end_time": 4104.582,
      "index": 152,
      "start_time": 4076.903,
      "text": " But I'm just wondering if you have any practical suggestions or how we can actually make people to use tools reflectively, whether it's by policy regulations, social norm, education, or in any realm that you think, especially in terms of exonity, not exposed. So it's not just that, you know, you use the tools badly and then you get punished. But how do we encourage people to use that? Yeah. Yeah. Great question. Yeah. I mean, I think"
    },
    {
      "end_time": 4132.841,
      "index": 153,
      "start_time": 4105.128,
      "text": " This is the end of the session. Thank God. So I don't have to actually give you a lot of great detail. And again, I want to remind you that I'm a philosopher, not a policy person. So I'm good at pointing out problems, not necessarily solving them. This is truth and advertising people, right? I work in, I'm in business school, so I'm looking for solutions. I know. And thank, I'm glad that you are. I think broadly speaking though, we can give some general solutions that are, that we really need to take more seriously."
    },
    {
      "end_time": 4158.899,
      "index": 154,
      "start_time": 4133.251,
      "text": " Right now in this country and a variety of countries around the world, there are certain institutions that are devoted to the reflective pursuit of knowledge that are under attack. And those are institutions like this one and other ones. And I think right now we need to do a better job of protecting and promoting the work of those institutions."
    },
    {
      "end_time": 4189.309,
      "index": 155,
      "start_time": 4159.684,
      "text": " I think those, these institutions, including my own and other institutions have not helped things themselves. I mean, we're not often very good at sort of marketing as it were our own, uh, contribution to society. Right. Which I think goes beyond just getting people jobs, although that's important part of it, but actually making them into more reflective democratic citizens. I mean, I believe that John Dewey was right. That's the goal of education to get people to be better democratic citizens."
    },
    {
      "end_time": 4219.019,
      "index": 156,
      "start_time": 4191.357,
      "text": " I also think that clearly our ability to transmit information, what we call news, to people in a reliable way has become compromised, as we all know, in recent years. I think that what we sometimes call the news media, the traditional news media, right, has obviously, it's a disappearing, possibly doomed financial model."
    },
    {
      "end_time": 4247.108,
      "index": 157,
      "start_time": 4219.428,
      "text": " for transmitting reliable information. If it is doomed, we need to quickly come up with another model. I have thoughts about that. But it may not be doomed if we could intervene at a societal level to try to promote and protect those institutions. Because I think those are the things, those institutions, the two I just named, together with another institution, the legal system,"
    },
    {
      "end_time": 4259.957,
      "index": 158,
      "start_time": 4247.432,
      "text": " That really are the three pillars that stand between us and the end of democracy, something which, like many of you here, I'm a little worried about. Thank you."
    },
    {
      "end_time": 4282.415,
      "index": 159,
      "start_time": 4267.483,
      "text": " Firstly, thank you for watching, thank you for listening. There's now a website, curtjymongle.org, and that has a mailing list. The reason being that large platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever they like."
    },
    {
      "end_time": 4308.882,
      "index": 160,
      "start_time": 4282.671,
      "text": " That's just part of the terms of service. Now, a direct mailing list ensures that I have an untrammeled communication with you. Plus, soon I'll be releasing a one-page PDF of my top 10 toes. It's not as Quentin Tarantino as it sounds like. Secondly, if you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself"
    },
    {
      "end_time": 4326.203,
      "index": 161,
      "start_time": 4308.882,
      "text": " Plus, it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube,"
    },
    {
      "end_time": 4355.623,
      "index": 162,
      "start_time": 4326.391,
      "text": " Which in turn greatly aids the distribution on YouTube. Thirdly, there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, they disagree respectfully about theories and build as a community our own toe. Links to both are in the description. Fourthly, you should know this podcast is on iTunes. It's on Spotify. It's on all of the audio platforms. All you have to do is type in theories of everything and you'll find it. Personally, I gained from rewatching lectures and podcasts."
    },
    {
      "end_time": 4375.555,
      "index": 163,
      "start_time": 4355.623,
      "text": " I also read in the comments"
    },
    {
      "end_time": 4399.002,
      "index": 164,
      "start_time": 4375.555,
      "text": " and donating with whatever you like there's also paypal there's also crypto there's also just joining on youtube again keep in mind it's support from the sponsors and you that allow me to work on toe full time you also get early access to ad free episodes whether it's audio or video it's audio in the case of patreon video in the case of youtube for instance this episode that you're listening to right now was released a few days earlier"
    },
    {
      "end_time": 4405.606,
      "index": 165,
      "start_time": 4399.002,
      "text": " Every dollar helps far more than you think either way your viewership is generosity enough. Thank you so much"
    }
  ]
}
