Theories of Everything with Curt Jaimungal

Mark Bailey: LLMs, Disinformation, Chatbots, National Security, Democracy

May 28, 2024 · 30:14


Transcript

[0:00] The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science they analyze.
[0:20] Culture, they analyze finance, economics, business, international affairs across every region. I'm particularly liking their new insider feature. It was just launched this month. It gives you, it gives me, front-row access to The Economist's internal editorial debates.
[0:36] Where senior editors argue through the news with world leaders and policy makers in twice-weekly long-format shows. Basically an extremely high quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines. As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
[2:07] Disinformation erodes democracy. Democracy requires intellectual effort. I see democracy as being this sort of metastable point if you think of it in some sort of a state space. A state space of different social configurations. And the reason it's metastable is because it takes a lot of effort. It requires an engaged population and it requires an agreed upon set of facts about what's real about the world.
[2:31] Mark Bailey is a faculty member at the National Intelligence University, where he is the department chair for cyber intelligence and data science, as well as being the co-director of the Data Science Intelligence Center. This talk was given at MindFest, put on by the Center for the Future Mind, which is spearheaded by Professor of Philosophy Susan Schneider. It's a conference held annually at Florida Atlantic University, where they merge artificial intelligence and consciousness studies. The links to all of these will be in the description.
[3:00] There's also a playlist here for MindFest. Again, that's that conference, Merging AI and Consciousness. There are previous talks from people like Scott Aaronson, David Chalmers, Stuart Hameroff, Sara Walker, Stephen Wolfram, and Ben Goertzel. My name's Curt Jaimungal, and today we have a special treat, because usually Theories of Everything is a podcast. What's ordinarily done on this channel is I use my background in mathematical physics, and I analyze various theories of everything,
[3:24] Okay, so I want to welcome everybody to our final session of
[3:53] This year's MindFest conference, which apparently I'm moderating even though I've had very little sleep, so I'm sure I'm going to mess up drastically and I apologize in advance.
[4:04] But our first speaker is Mark Bailey, who's a professor at National Intelligence University, wonderful co-author, and he's going to be talking about some of the same issues that Michael Lynch discussed, but from a national security perspective, and he's going to talk for just a short amount of time, and then we're going to have some Q&A, and then Stephen Gupta is going to be speaking about some philosophical issues, and then
[4:33] I'm going to be asking questions of the audience and, of course, other people, I mean the participants here at the roundtable. And instead of going around the room for the video and introducing ourselves one by one, which would take all the video, I just ask that before you ask your first question or make your first comment, just say who you are. Okay? And of course, at the end, the audience will have an opportunity to ask questions as well. Okay, so let's go ahead and get started.
[5:06] Awesome. Thank you so much, Susan. It's so great to be here. I love coming to MindFest. It's really a lot of fun and just a great group of people. So like Susan said, for those of you who don't know me, my name is Mark Bailey. I'm the department chair for cyber intelligence and data science at the National Intelligence University. You heard my dean speak earlier this morning, Dr. Tom Pike. But basically, a lot of people don't really know what the National Intelligence University is. And we're a federally funded university.
[5:35] So I sometimes use the analogy of like a service academy. So like a West Point or like a Naval Academy because we serve government personnel. So we serve the intelligence community and military personnel who are adjacent to the intelligence community. So a lot of our work focuses pretty heavily on national security related issues. So my background. So I'm going to talk a little bit here about chat bot epistemology and sort of how that relates to democracy and
[6:04] sort of the erosion of democratic norms. My background is actually in engineering. And so I fancy myself an engineer who plays a mathematician on TV, because I teach mostly math. And I also dabble in philosophy, because I teach a lot on AI ethics as well. And Susan and I publish a lot sort of in that realm. So I will begin here. What I really want to do here is sort of start the discussion, right? So we have this lovely panel here.
[6:33] A lot of what I talk about right now, Michael already covered wonderfully in his previous talk. But I do want to focus a bit on, you know, what I see as the major issues with AI chatbots and sort of how they relate to the erosion of democratic norms. So as we learned earlier, AI is a black box. So oftentimes AI problems are encapsulated into like three main issues. So you have explainability because of this black box issue. AI can be unpredictable. There are a lot of reasons for that.
[7:03] It's hard to understand how you could have a neural network with billions of parameters and then sort of map that to some deterministic function to understand exactly what's going on and why it makes decisions the way that it does. And because of that unpredictability, sometimes you can have unexpected behaviors that are unaligned with human expectation. So that leads to what we call the alignment problem. So it's the ability to align AI with human expectations or human moral values or how you would want AI to behave.
[7:31] And then by extension, you get into what we call the control problem, which is how do you ensure that you can control AI and ensure that it's aligned with human expectations and does what you want it to do? So there's this AI black box that leads to some of these security risks with chatbots. And we'll talk a little bit about that. AI is also an enabling tool. So we have a lot of issues with the spread of dis and misinformation online on Facebook and Twitter and other social media platforms.
[8:00] And a lot of times that's facilitated by like troll farms who create disinformation because they want to target what I would consider social fissure points within a society that they want to create discord in, right? So they might target specific issues that are divisive in different ways and they create these posts to kind of basically stir the pot a little bit and, you know, create social discord in that way. And so right now humans do that but AI chatbots are going to enable that
[8:30] you know, at a grander scale because it's going to be a lot easier to use things like ChatGPT to basically, you know, to create large amounts of information around a specific narrative and then propagate that on social media. And then, of course, if you have AI-empowered chat bots that are trained on this particular narrative, then it sort of compounds that problem. And that also leads back to the AI black box problem. So if you can't really predict how these things are going to behave,
[8:56] There's going to be an added level of uncertainty if you have an AI-driven chatbot that's, you know, propagating and talking, you know, to other people online in this capacity. I'm sure a lot of you remember a few years ago Microsoft had this Tay chatbot which was, and this was pre-GPT, but it was this chatbot that they released on Twitter and like within, I don't know, a few hours it became like this vehement racist and like anti-Semite because Twitter is basically a cesspool of, you know, nonsense.
[9:24] And it learns from Twitter. And so it started to repeat all of these different really terrible things. And so we are going to see more of that, especially with these large language models enabling and empowering these types of devices. And then finally, disinformation erodes democracy. So as I'll talk about in a little bit, democracy requires intellectual effort. I see democracy as being this sort of metastable point if you think of it in some sort of a state space, a state space of like
[9:54] You know, different social configurations. And the reason it's metastable is because it takes a lot of effort. It takes intellectual effort to manage and run a democracy because it requires an engaged population, and it requires an agreed upon set of facts about what's real about the world. And then you can debate policy about those facts, but you can't debate something if you don't agree on what the facts actually are. And so I see that problem sort of being exacerbated by a chat bot, and we'll talk a bit about that.
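A minimal numerical sketch of this metastability metaphor, not from the talk: the potential V(x) = x^4 - 4x^2 + x and every number below are invented for illustration. A state started in the shallow well, the "democracy" minimum that takes effort to hold, is kicked by random shocks, a rough stand-in for disinformation, until it hops into the deeper, more stable "authoritarian" well.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_V(x):
    # V(x) = x**4 - 4*x**2 + x: a shallow local minimum near x ~ +1.36
    # (the metastable "democracy" well) and a deeper minimum near x ~ -1.48
    # (the lower-energy "authoritarian" well).
    return 4 * x**3 - 8 * x + 1

x, dt, noise = 1.36, 0.01, 1.2  # start in the shallow well
for step in range(200_000):
    # Overdamped Langevin dynamics: drift downhill plus random shocks,
    # the analogue of perturbations like disinformation.
    x += -grad_V(x) * dt + noise * np.sqrt(dt) * rng.normal()
    if x < -1.0:
        print(f"hopped into the deeper well after {step} steps")
        break
```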
[10:25] So epistemic breakdown. So we define knowledge as justified true belief. And that's sort of the classical definition of what knowledge is from epistemology. And there's some nuances to that. And maybe that's not the best definition, but we won't go into that here. So even before chatbots, there was this epistemic breakdown, this breakdown of justified true belief. And when I say justified, I mean there has to be some sort of a chain of justification
[10:53] you know, that leads to validation of that knowledge. So, you know, you can kind of think of it like in academia, we cite our sources. And you do that because you have to map it back to some chain of reasoning to justify what it is, you know, you want to present. And that breaks down in a lot of ways. But even before these AI-enabled chatbots, this was already evident in social media. You know, we saw, I mentioned earlier a lot of the, you know, disinformation and everything that was propagated by troll farms and whatever else.
[11:21] It sort of breaks down this idea of what knowledge is and creates these echo chambers of unjustified belief and it propagates this dis and misinformation which erodes democracy. And like I mentioned, knowledge discernment requires cognitive effort. And if you're not willing to put or able to put in the cognitive effort to discern what's true and what's not, based on what you read on social media, based on what's propagated by a lot of these different chat bots and whatever else,
[11:51] You know, you're not able to really contribute intellectually to try and, you know, understand and agree upon these facts and make, I would say, a valid discernment about the facts upon which you can then debate policy. So this ends up causing an over-reliance on confirmation bias, a heuristic that leads to unjustified or false belief. And then, you know, in that way these algorithms in some ways promote
[12:19] you know, the amplification of extremism. And then, you know, like I said, as these large language models are integrated into some of these, you know, disinformation opportunities, it's just going to, you know, sort of catalyze this and accelerate this epistemic breakdown. Again, I mentioned the idea of this AI black box. So explainability. So if you have a chat bot that's powered by AI, even if you train it on a particular set
[12:48] of ideological positions or something, it may still behave in ways that you don't understand or you don't anticipate. So that's the whole problem with this AI black box. And then of course, you know, sort of this epistemic crisis that you see in democracy, right? So as I mentioned, knowledge discernment is critical for the functioning of a healthy democracy. Yet these LLMs may write the news or be our personal epistemic assistants in different ways, right? So, you know, if you
[13:17] If you rely too heavily on these LLMs, you kind of lose that chain of epistemic justification because you don't always know where the knowledge came from. Because it's essentially interpolated from the training set of these models. So there's no epistemic chain of justification that you can follow to validate the knowledge that you have about whatever you're asking it about. And then, of course, democracy requires an intellectually engaged population. And then more critically, this agreed upon set of facts upon which you can debate policy.
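A toy sketch of the "chain of epistemic justification" point; the Claim class and the example data are invented, not anything from the talk. Here a claim counts as justified only if its citation chain bottoms out in primary evidence, which is exactly the chain a bare chatbot answer, interpolated from training data, does not carry:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    is_primary: bool = False  # e.g. a dataset, document, or measurement
    sources: list["Claim"] = field(default_factory=list)

    def justified(self) -> bool:
        # Justified true belief, operationalized: traceable through the
        # citation chain to primary evidence.
        if self.is_primary:
            return True
        return bool(self.sources) and all(s.justified() for s in self.sources)

evidence = Claim("census table, 2020", is_primary=True)
article = Claim("the population grew 2%", sources=[evidence])
chatbot_answer = Claim("the population grew 2%")  # no chain to follow
print(article.justified(), chatbot_answer.justified())  # True False
```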
[13:47] And then when this chain of reasoning is broken, that creates this epistemic disintegration. So this has global security implications as well, this erosion of truth. So the erosion of democracy creates opportunities for totalitarian tendencies to take root. So as the capacity of individuals to ascertain truth and productively debate policies grounded in that truth degrades, humans are likely to relinquish their capacity for reasoning about policy to charismatic leaders whose rhetoric may align with
[14:17] biases in the population. So Michael, I think, explained very eloquently that the epistemic situation of humans is very fragile. And so it can break pretty easily. And if it breaks, you may be more inclined to rely on these heuristics about how you understand the world. And sometimes those heuristics are things like confirmation bias or any other biases that you may have internalized. And that may lead you to be more inclined to
[14:45] sort of relinquish your ability to epistemically analyze or make some inference about what's true and what's not to some charismatic leader. And so that leads to a, I would say, more stable form of social structuring, which would be something more totalitarian, because it takes less effort. It's more energetically favorable in that way. So this degradation of healthy democracies because of this epistemic erosion
[15:16] may create opportunities for the emergence of potentially a new global hegemony built on some authoritarian world view. And you may see countries sort of lurching toward this authoritarian tendency because of this accelerated spread of disinformation and the erosion of our ability to discern what's true and what's not and then debate appropriate policies on that. So thank you so much.
[16:12] Are there any questions for Mark? I have one, may I? So this relates to what came up in Michael's wonderful talk too. Someone in the audience, I forgot who it was, they said, how does the use of an LLM from an epistemological standpoint, a chat bot that is, differ from the use of a calculator? How would you answer that? I mean it reminds me of
[16:39] some issues with maybe a symbolic approach. And yeah, that's interesting. I mean, I think, for one thing, a calculator doesn't necessarily lack explainability, right? It's very deterministic in terms of how the output is going to present itself, because math is deterministic in a lot of ways. And if you use a calculator to add two numbers together, that answer is going to be the same
[17:08] Regardless of the context. But if you rely on a chat bot, because of the stochasticity that exists within these types of models, you might get different types of things. So it's different than a calculator in that way, I would say. Wonderful. Any other? Oh, yes, Curt. Would you still say that it's drastically different than a calculator?
[17:33] If it's entirely deterministic, as in like you have some kind of a decision tree where you ask it very specific things and it gives you very specific answers, is that kind of what you're describing? Something of that sort?
[17:53] Oh, I see what you're saying. Yeah, in that way for sure.
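A minimal sketch of the calculator-versus-chatbot contrast above, with invented numbers: a calculator is a pure function of its inputs, while a temperature-sampled toy language model can return different outputs for the same input.

```python
import numpy as np

rng = np.random.default_rng()

def calculator(a, b):
    return a + b  # a pure function: same inputs, same output, every time

def toy_next_token(logits, temperature=1.0):
    # Temperature sampling over a toy three-token vocabulary: with
    # temperature > 0, the same prompt (same logits) can yield a
    # different continuation on every call.
    p = np.exp(np.array(logits) / temperature)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

print(calculator(2, 2), calculator(2, 2))                   # always: 4 4
print([toy_next_token([2.0, 1.5, 0.5]) for _ in range(5)])  # varies per run
```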
[18:10] Yeah, thank you, Mark. Actually, this brings up something we were talking about Wednesday night, and I'll say it for everybody else for their benefit, right? In the sense that we were talking a little bit like, I'm actually really happy when I see how poorly Congress does, right? Or how slowly things change, right? Because rapid change is much scarier, right? Say things were as efficient as we want; when something changes, it just happens, right? Now, in the same way, I was thinking while your talk was going on,
[18:36] You know, couldn't the fight against epistemic erosion actually be sort of weaponized in itself, right? So take the, and maybe this kind of connects up to when Elon Musk is talking about truth-maximizing GPT-12 or whatever it is he wants to make, right? It feels like you could just weaponize that as easily as the erosion, right? So you get the negative consequence because you put into place a system, right, that fights the erosion but leans a particular way. And I'm just wondering, like,
[19:03] I think there's a lot in what you just said, and I think you're absolutely right. I think there are certainly opportunities for someone with autocratic tendencies or some ideological bent to create
[19:31] sort of a fine-tuned version of some of these chatbots that sort of toe the party line. And that leads to its own dangers, epistemic dangers, because now you have a different potential set of truths upon which these bots are going to propagate information. I mean, you may have some stochastic deviations from that because of the black box issues, but yeah, that's certainly a problem. And you also sort of implied the, I guess,
[19:57] The glacial pace at which government typically makes decisions. That's very true. And I think democracy is naturally a slow process, because a government has to be resistant to authoritarian tendencies. So if you have an authoritarian leader who comes in and wants to change everything, they can't do that. Because it has to build momentum in order for that to happen, which I think is, even though we complain a lot about the government bureaucracy being super slow, which especially for us, it can be infuriating at times.
[20:27] It's purposely slow. There's a reason for that sort of, you know, that pace, that glacial pace. One of the things that we do know is that state actors are already using the internet and social media to manipulate content and to provide structured responses and answers to people in ways that direct them to certain conclusions. And we know people are largely a product of the information they consume, and their attitudes and behaviors are too. How much worse could
[20:57] a language model do when the state actors are already operating to manipulate attitudes and opinions through the use of some of these tools? Yeah, I mean, I think that's a great question. I see AI affecting that in a few different ways. So one, it's enabling. So it helps generate content that sort of toes that party line. You know, it also, it has the capacity to create chatbots that are, you know, more stochastic in a way. So they can, they're not just going to respond in a predetermined way, but they can
[21:24] you know, articulate things and respond to questions in a more human-like way. So in that way, you know, they might become more believable to some degree. But then, of course, that comes with its own risks. So you might have a totalitarian regime that has a particular view of history that it doesn't necessarily want its population to know. So if they have this sort of, you know, the uncertainty in some of these types of bots might not work in their favor and they might be disinclined to use these types of things because
[21:52] They can't necessarily guarantee that it would toe the party line in those ways. I have a question, just going back to the explainability point. So suppose I cannot explain what's going on and I don't have a model to interpret how the system is getting the output. But if the output is always right, giving me reliable answers, why would explainability matter?
[22:22] How the system gets the answers would be irrelevant if the answers are ones I can rely on, if I can trust the algorithm. So why would explainability be a point if I have a reliable answer? Maybe this would be the point: if it's a reliable answer, why do I still need to explain how the algorithm got to that answer?
[22:48] Well, I think if it had a reliable answer, then explainability wouldn't necessarily be an issue. But I think because of the way that these models work, they're inherently unexplainable in certain ways. So that makes them not always reliable. So you get these hallucinations. You get these unusual outputs. Like we heard earlier from Scott, that's kind of almost a feature. You can't really code it out. But if they were, in fact, deterministic in some way where you could always rely on them to give you the same answer, then that wouldn't be a problem.
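One distinction worth making explicit here, as a toy sketch with invented numbers: greedy decoding, the temperature-to-zero limit, makes a model's output deterministic and repeatable, but repeatability alone does not make the answer reliable or explainable.

```python
import numpy as np

def greedy_next_token(logits):
    # Greedy decoding (temperature -> 0): the same input always yields
    # the same token, so the output is perfectly repeatable.
    return int(np.argmax(logits))

logits = [2.0, 1.5, 0.5]
print(greedy_next_token(logits), greedy_next_token(logits))  # 0 0, every run
# Repeatability says nothing about whether the answer is true; reliability
# depends on what the model learned, not on the decoding rule.
```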
[23:16] Part of what government represents, and I'm not necessarily asking you a question, we work together, I just retired, but part of what government represents and what I find fascinating about these systems is the systems are to some extent attached to particular components of government mechanisms that make the system work. Therefore, if the people lose trust in that system,
[23:45] they could potentially lose trust in the governments that are quote-unquote responsible for monitoring or, you know, likewise presenting laws or regulations regulating such systems. So that's one of the problems, the anarchy and chaos that those systems might pose. Is that correct? Yeah, I think that's a good observation. Since you said comment and not question, I mean, I'm just going to pause and say
[24:13] Yes, we have a significant kind of democracy-in-crisis problem. But talking with the mayor over lunch, there's also some great opportunities. I think this kind of echoes some things that Mike said where, number one, we could use these chat bots so now maybe some of these opaque bureaucratic processes of the government, like figuring out where your property line is, can now be simply answered, right, if you have that chat bot, right, or your tax
[24:39] So I think any tool can be used for good or ill, and how do we take some of these tools and actually use them to make people's lives easier.
[24:51] To say it just tritely and optimistically, you know, the Shining Path in Peru in the 1980s was defeated because they launched a TV show that was showing how they were reducing massive government bureaucracy. So then people could go into that knowledge and make better decisions, you know, say, hey, I can actually get this loan now, or I can go to college because they made this rule change last night. So I think there is some good in here, if we can exploit this technology,
[25:19] Yeah, I mean, you're totally right. I mean, there are good and bad points to everything, including these large language model-driven types of chatbots and opportunities. There are definitely great opportunities to help make government perhaps more efficient, but there are also opportunities for sowing disinformation where a nefarious actor could use it in ways that it would erode democracy. Yeah, maybe just in response to that a little bit, now I'm starting to feel like a gadfly.
[25:48] I'm not trying to be, I promise. I guess in some ways I do sometimes worry about, say, the situation where I go, well, wouldn't it make it so much easier if I could just interact with a large language model, a chat bot, right, in this kind of situation? And I tend to think, well, anytime I have to deal with Xfinity, which tells me the same thing, I go, well, is that really what I want to do? Do I really want to have to talk to a chat bot before I get to this place? In some ways, I kind of like the human, fallible, kind of messed-up nature of it.
[26:16] Right where I go, actually, I don't like what you tell me. I'm going to try to find somebody else. But if it's this first initial barrier of, well, to access your ability to influence what happens between you and your government has this barrier, I wonder, like, is that actually achieving the thing it's supposed to achieve? I don't know. Sorry, man. I think those points are valid. So, you know, just to the sense that, like, they found, like, the new generation, like, they don't want to call to make an appointment, right? They just want to go online. So, I think it's really
[26:43] Thank you so much for watching.
[27:02] Firstly, thank you for watching, thank you for listening. There's now a website, curtjaimungal.org, and that has a mailing list. The reason being that large platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever they like.
[27:17] That's just part of the terms of service. Now, a direct mailing list ensures that I have an untrammeled communication with you. Plus, soon I'll be releasing a one-page PDF of my top 10 TOEs. It's not as Quentin Tarantino as it sounds. Secondly, if you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself
[27:44] Plus, it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube, which in turn
[28:02] Greatly aids the distribution on YouTube. Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, they disagree respectfully about theories, and build as a community our own TOE. Links to both are in the description. Fourthly, you should know this podcast is on iTunes. It's on Spotify. It's on all of the audio platforms. All you have to do is type in Theories of Everything and you'll find it. Personally, I gained from rewatching lectures and podcasts.
[28:30] I also read in the comments
[28:50] and donating with whatever you like. There's also PayPal. There's also crypto. There's also just joining on YouTube. Again, keep in mind it's support from the sponsors and you that allows me to work on TOE full time. You also get early access to ad-free episodes, whether it's audio or video. It's audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier.
[29:14] Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much.
    },
    {
      "end_time": 1619.258,
      "index": 59,
      "start_time": 1603.217,
      "text": " Thank you so much for watching."
    },
    {
      "end_time": 1637.654,
      "index": 60,
      "start_time": 1622.722,
      "text": " Firstly, thank you for watching, thank you for listening. There's now a website, curtjymungle.org, and that has a mailing list. The reason being that large platforms like YouTube, like Patreon, they can disable you for whatever reason, whenever they like."
    },
    {
      "end_time": 1664.121,
      "index": 61,
      "start_time": 1637.892,
      "text": " That's just part of the terms of service. Now, a direct mailing list ensures that I have an untrammeled communication with you. Plus, soon I'll be releasing a one-page PDF of my top 10 toes. It's not as Quentin Tarantino as it sounds like. Secondly, if you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself"
    },
    {
      "end_time": 1682.619,
      "index": 62,
      "start_time": 1664.121,
      "text": " Plus, it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube, which in turn"
    },
    {
      "end_time": 1710.845,
      "index": 63,
      "start_time": 1682.807,
      "text": " Greatly aids the distribution on YouTube. Thirdly, there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, they disagree respectfully about theories and build as a community our own toe. Links to both are in the description. Fourthly, you should know this podcast is on iTunes. It's on Spotify. It's on all of the audio platforms. All you have to do is type in theories of everything and you'll find it. Personally, I gained from rewatching lectures and podcasts."
    },
    {
      "end_time": 1730.794,
      "index": 64,
      "start_time": 1710.845,
      "text": " I also read in the comments"
    },
    {
      "end_time": 1754.258,
      "index": 65,
      "start_time": 1730.794,
      "text": " and donating with whatever you like. There's also PayPal. There's also crypto. There's also just joining on YouTube. Again, keep in mind it's support from the sponsors and you that allow me to work on toe full time. You also get early access to ad free episodes, whether it's audio or video. It's audio in the case of Patreon video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier."
    },
    {
      "end_time": 1760.828,
      "index": 66,
      "start_time": 1754.258,
      "text": " Every dollar helps far more than you think either way your viewership is generosity enough. Thank you so much"
    },
    {
      "end_time": 1794.531,
      "index": 67,
      "start_time": 1773.677,
      "text": " Think Verizon, the best 5G network, is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store today."
    },
    {
      "end_time": 1818.695,
      "index": 68,
      "start_time": 1799.224,
      "text": " Jokes aside, Verizon has the most ways to save on phones and plans where everyone in the family can choose their own plan and save. So bring in your bill to your local Miami Verizon store today and we'll give you a better deal."
    }
  ]
}
