Theories of Everything with Curt Jaimungal

Daniel Schmachtenberger: AI Risks & The Metacrisis

May 17, 2023 · 4:29:05


Transcript

[0:00] The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region.
[0:26] I'm particularly liking their new Insider feature, which was just launched this month. It gives me front-row access to The Economist's internal editorial debates, where senior editors argue through the news with world leaders and policy makers, and twice-weekly long-format shows, basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond headlines.
[0:53] As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
[1:06] This is a real good story about Bronx and his dad Ryan, real United Airlines customers. We were returning home and one of the flight attendants asked Bronx if he wanted to see the flight deck and meet Kath and Andrew. I got to sit in the driver's seat. I grew up in an aviation family and seeing Bronx kind of reminded me of myself when I was that age. That's Andrew, a real United pilot. These small interactions can shape a kid's future. It felt like I was the captain. Allowing my son to see the flight deck will stick with us forever. That's how good leads the way.
[1:36] As of today, we are in a war that has moved the atomic clock closer to midnight than it has ever been. We're dealing with nukes and AI and things like that. We could easily have the last chapter in that book if we are not more careful about confident, wrong ideas.
[1:55] This is a different sort of podcast, not only because it's Daniel Schmachtenberger, one of the most requested guests, who, by the way, I'll give an introduction to shortly, but also because today marks season three of the Theories of Everything podcast. Each episode will be far more in-depth, more challenging, more engaging, have more energy, more effort, and more thought placed into it than any single one of the previous episodes.
[2:19] Welcome to the season premiere of season three of the Theories of Everything podcast with myself, Curt Jaimungal. This will be a journey of a podcast.
[2:40] with several moments of pause, of tutelage, of reflection, of surprise appearances, even personal confessions. This is meant for you to be able to watch and re-watch, or listen and re-listen. As with every TOE podcast, there are timestamps in the description, and you can just scroll through to see the different headings, the chapter marks. I say this phrase frequently in the Theories of Everything podcast. This phrase, just get wet, which comes from Wheeler, and it's about how there are these abstruse concepts in mathematics,
[3:09] and you're mainly supposed to get used to them, rather than attempt to bang your head against a wall to understand it the first time through. It's generally in the rewatching that much of the lessons are acquired and absorbed and understood. While you may be listening to this, so either you're walking around and it's on YouTube or you're
[3:25] I don't know about you, but much or most in fact of the podcasts that I watch, I walk away with this feeling like I've learned something but I actually haven't and the next day if you ask me to recall, I wouldn't be able to recall much of it.
[3:51] That means that they're great for being entertaining and for feeling like I'm learning something. That is, the feeling of productivity. But if I actually want to deep dive into subject matter, it seems to fail at that, at least for myself. Therefore, I'm attempting to solve that by working with the interviewee. For instance, we worked with Daniel to make this episode, and any episode that comes out from season three onward, not only a fantastic podcast, but perhaps
[4:18] in this small, humble way to evolve what a podcast could be. You may not know this, but in addition to math and physics, my background is in filmmaking, so I know how powerful certain techniques can be with regards to elucidation. How the difference between making a cut here or making a cut here can be the difference between you absorbing a lesson or it being forgotten. By the way, my name is Curt Jaimungal, and this is a podcast called Theories of Everything, dedicated to investigating the versicolored terrain of theories of everything,
[4:47] primarily from a theoretical physics perspective, but also venturing beyond that to hopefully understand what the heck is fundamental reality. Get closer to it. Can you do so? Is there a fundamental reality? Is it fundamental? Because even the word fundamental has certain presumptions in it. I'm going to use almost everything from my filmmaking background and my mathematical background to make TOE the deepest dive, not only with the guest, but we'd like it to be the deepest dive on the subject matter that the guest is speaking about.
[5:14] It's so supplementary that it's best to call it complementary, as the aim is to achieve so much that there's no fat, there's just meat. It's all substantive, that's the goal. Now there's some necessary infrastructure of concepts to be explicated prior in order to gain the most from this conversation with Daniel, so I'll attempt to outline when needed
[5:32] Again, timestamps are in the description so you can go at your own pace, you can revisit sections. There will also be announcements throughout and especially at the end of this video, so stay tuned. Now, Daniel Schmachtenberger is a systems thinker, which is different from reductionism primarily in its focus.
[5:47] So systems thinkers think about the interactions, the N² or greater interactions, the second-order or third-order effects. And Daniel in this conversation is constantly referring to the interconnectivity of systems and the potential for unintended consequences. We also talk about the risks associated with AI. We also talk about its boons, because that's often overlooked.
[6:07] Plenty of alarmist talk is out there on this subject. When talking about the risks, we're mainly talking about AI's alignment or misalignment with human values. We also talk about why each route, even if it's aligned, isn't exactly salutary. About a third of the way through, Daniel begins to advocate for a cooperative orientation in AI development,
[6:24] where the focus is on ensuring that AI systems are designed to benefit us and that there are safeguards in place, much like with any other technology. You can think about this in terms of a recent tweet by Rob Miles, which says that it's not that hard to go to the moon, but in worlds that manage it, saying "these astronauts will probably die" is responded to with a detailed technical plan showing all the fail-safes, testing, and procedures that are in place.
[6:46] They're not met with, hey, wow, what an extraordinarily speculative claim. Now, this cooperative orientation resonates with the concept of Nash equilibrium.
[6:57] A Nash equilibrium occurs when all players choose their optimal strategy given their beliefs about other people's strategies such that no one player can benefit from altering their strategy. Now that was fairly abstract. So let me give an instance. There's rock, paper, scissors, and you may think, Hey, how the heck can you choose an optimal strategy in this random game? Well, that's the answer. It's actually to be random. So a one third chance of being rock or paper or scissors.
[7:24] And you can see this because if you were to choose, let's say, one half chance of being rock, well then a player can beat you one half of the time by choosing their strategy to be paper, and then that means that you can improve your strategy by choosing something else.
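To make the rock-paper-scissors claim concrete, here is a minimal numerical sketch (an editorial addition, not part of the episode): it checks that against the uniform one-third mixture no pure move does better than break even, while a mixture that over-weights rock can be exploited by paper, exactly as described above.

```python
# Minimal sketch: verifying the mixed Nash equilibrium of rock-paper-scissors.
# Payoffs are from player 1's perspective: +1 win, 0 tie, -1 loss.
import numpy as np

# Rows = player 1's pure move (R, P, S); columns = player 2's pure move (R, P, S).
payoff = np.array([
    [ 0, -1,  1],   # rock     vs rock, paper, scissors
    [ 1,  0, -1],   # paper    vs rock, paper, scissors
    [-1,  1,  0],   # scissors vs rock, paper, scissors
])

def best_deviation_value(opponent_mix):
    """Best expected payoff player 1 can get with any pure move against a fixed mixture."""
    return (payoff @ opponent_mix).max()

uniform = np.array([1/3, 1/3, 1/3])
biased  = np.array([1/2, 1/4, 1/4])   # over-weights rock

print(best_deviation_value(uniform))  # 0.0  -> no pure move beats the uniform mix
print(best_deviation_value(biased))   # 0.25 -> playing paper exploits the extra rock
```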
[7:36] In game theory, a move is something that you do at a particular point in the game or it's a decision that you make. For instance, in this game, you can reveal a card, you can draw a card, you can relocate a chip from one place to another. Moves are the building blocks of games and each player makes a move individually in response to what you do or what you don't do or
[7:56] in response to something that they're thinking, a strategy, for instance. A strategy is a complete plan of action that you employ throughout the game. A strategy is your response to all possible situations, all situations that can be thrown your way. And by the way, that's what this upside down funny looking symbol is. This means for all in math and in logic. It's a comprehensive guide that dictates the actions you take in response to the players you cooperate with and also the players that you don't.
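In symbols (an editorial gloss using the "for all" quantifier mentioned above), a strategy profile (s_1*, ..., s_n*) is a Nash equilibrium when no player i can gain by unilaterally deviating:

```latex
\forall i,\;\; \forall s_i' \in S_i:\qquad
u_i\!\left(s_i^{*},\, s_{-i}^{*}\right) \;\ge\; u_i\!\left(s_i',\, s_{-i}^{*}\right),
```

where S_i is player i's strategy set, u_i their payoff function, and s_{-i}* the strategies of everyone else.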
[8:27] A common misconception about Nash Equilibria is that they result in the best possible outcome for all players. Actually, most often they're suboptimal for each player. They also have social inefficiencies. For instance, the infamous prisoner's dilemma. Now this relates to AI systems, and as Daniel talks about, this has significant implications for AI.
[8:46] Do we know if AI systems will adopt cooperative or uncooperative strategies? How desirable or undesirable will those outcomes be? What about the nation states that possess them? Will it be ordered and positive, or will it be chaotic and ataxic like the intersection behind me? Although it's fairly ordered right now, it's usually not like this. The stability of a Nash equilibrium refers to its robustness in the face of small changes, perturbations in payoffs or strategies.
[9:11] An unstable Nash equilibrium can collapse under slight perturbations, leading to shifts in player strategies and then consequently a new Nash equilibrium. In the case of AI risk, an unstable Nash equilibrium could result in rapid and extreme harmful oscillations in AI behavior as they compete for dominance. And by the way, this isn't including that an AI itself may be fractionated in the way that we are as people with several selves inside us vying for control in a Jungian manner.
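As a small illustration of the misconception mentioned above (again an editorial sketch, not from the episode), the prisoner's dilemma payoffs below show that mutual defection is the unique Nash equilibrium even though mutual cooperation pays both players more:

```python
# Minimal sketch: the prisoner's dilemma. Mutual defection (D, D) is the only
# Nash equilibrium, yet it is worse for both players than mutual cooperation (C, C).
from itertools import product

# Payoffs are (row player, column player).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(row, col):
    """True if neither player can gain by unilaterally switching their move."""
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(r, col)][0] for r in "CD")
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, c)][1] for c in "CD")
    return row_ok and col_ok

for row, col in product("CD", repeat=2):
    print(row, col, payoffs[(row, col)], "<- Nash" if is_nash(row, col) else "")
# Only (D, D) is marked Nash, even though (C, C) gives both players a higher payoff.
```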
[9:42] Generalizations also have a huge role in understanding complex systems. So what occurs is you take some concept, and then you list out some conditions, and then you relax some of those conditions. You abstract away. Through the recognition of certain recurring patterns, we can construct frameworks, we can hypothesize, such that hopefully it captures not only this phenomenon, but a diverse array of phenomena.
[10:04] The theme of Theories of Everything, of this channel, is: what is fundamental reality? And like I mentioned, we generally explore that from a theoretical physics perspective, but we also abstract out and think, well, what is consciousness? Does that arise from material? Does it have a relationship to what's fundamental reality? What about philosophy, what does that have to say? Metaphysics? So that is, generalizations empower prognostication, the discerning of patterns, and they streamline our examination of the environment that we seem to be embedded in.
[10:30] Now, in the realm of quantum mechanics, generalizations take on a specific significance. Given that we talk about probability and uncertainty, both in these videos, which you're seeing on screen now, and in this conversation with Daniel, it's fruitful to explore one powerful generalization of probabilities that bridges classical mechanics with quantum theory, called quasi-probability distributions.
[11:00] Born in the early days of quantum mechanics, a quasi-probability distribution, also known as a QPD, bridges between classical and quantum theories. There's this guy named Eugene Wigner, who around 1932 published his paper on the quantum corrections to thermodynamic equilibrium, which introduced the Wigner function.
[11:20] What's notable here is that both position and momentum appear in this analog to the wave function when ordinarily you choose to work in the so-called momentum space or position space, but not both. To better grasp the concept, think of quasi-probability distributions as maps that encode quantum features into classical-like probability distributions.
[11:41] Whenever you hear the suffix "-like," you should immediately be skeptical, as space-like isn't space and time-like isn't the same thing as time. In this instance, classical-like isn't classical. There's something called the Kolmogorov axioms of probability, and some of them are relaxed in these quasi-probability distributions. For instance, you're allowed negative values.
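For reference, this is the standard form of the Wigner function for a pure state ψ (an editorial addition, one common convention):

```latex
W(x,p) \;=\; \frac{1}{\pi\hbar}\int_{-\infty}^{\infty}
\psi^{*}(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy .
```

It is real and normalized, and its marginals reproduce the position and momentum distributions, but it can take negative values, which is exactly the relaxation of the non-negativity axiom just mentioned.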
[12:02] The development of QPDs expanded with the Glauber-Sudarshan P representation,
[12:17] introduced by Sudarshan in 1963 and refined by Glauber, and with Husimi's Q representation, which dates back to 1940. QPDs play a crucial role in quantum tomography, which allows us to reconstruct and characterize unknown quantum states. They also maintain their invariance under symplectic transformations, preserving the structure of phase space dynamics. You can think of this as preserving the areas of parallelograms formed by vectors in phase space.
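To spell out the invariance just mentioned (editorial gloss): a linear phase-space map S is symplectic when it preserves the symplectic form,

```latex
S^{\mathsf T} J\, S \;=\; J,
\qquad
J \;=\;
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}
\quad\text{(one degree of freedom)},
```

which for a single degree of freedom is equivalent to det S = 1, i.e. areas of parallelograms in the (x, p) plane are preserved.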
[12:44] Nowadays, QPDs have ventured beyond the quantum realm, inspiring advancements in machine learning and artificial intelligence. This is called quantum machine learning, and while it's in its infancy, it may be that the next breakthrough in lowering compute lies with these kernel methods and quantum variational autoencoders. By leveraging QPDs in place of density matrices,
[13:06] researchers gain the ability to study quantum processes with reduced computational complexity. For instance, QPDs have been employed to create quantum-inspired optimization algorithms like the quantum-inspired genetic algorithm, QGA, which incorporates quantum superposition to enhance search and optimization processes.
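Here is a toy sketch of a quantum-inspired genetic algorithm in the commonly cited Han-Kim style (an editorial illustration under that assumption, not necessarily the exact QGA variant meant here): each bit of a candidate solution is represented by a "qubit" angle, observed probabilistically, and rotated toward the best solution found so far.

```python
# Toy quantum-inspired genetic algorithm (Han-Kim style sketch).
# Each bit is a "qubit" angle theta with P(bit = 1) = sin(theta)^2.
import math, random

N_BITS, POP, GENERATIONS, DELTA = 16, 20, 50, 0.05 * math.pi

def fitness(bits):
    # Toy objective ("one-max"): maximize the number of 1s.
    return sum(bits)

def observe(angles):
    # "Collapse" the probabilistic chromosome into a concrete bit string.
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in angles]

# Start every qubit at theta = pi/4, an even superposition of 0 and 1.
population = [[math.pi / 4] * N_BITS for _ in range(POP)]
best_bits, best_fit = None, -1

for _ in range(GENERATIONS):
    for angles in population:
        bits = observe(angles)
        f = fitness(bits)
        if f > best_fit:
            best_bits, best_fit = bits, f
        # Rotate each qubit a small step toward the best solution found so far.
        for i, theta in enumerate(angles):
            direction = 1 if best_bits[i] == 1 else -1
            angles[i] = min(max(theta + direction * DELTA, 0.0), math.pi / 2)

print(best_fit, best_bits)  # converges toward the all-ones string
```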
[13:24] Quantum variational autoencoders can be used for tasks such as quantum states compression and quantum generative models, also quantum error mitigation. The whole point of this is that there are new techniques being developed daily, and unlike the incremental change of the past, there's a probability, a low one but it's non-zero, that one of these will remarkably and irrevocably change the landscape of technology.
[13:49] So generalizations are important. For instance, spin and GR. General relativity is known to be the only theory that's consistent with being Lorentz invariant, having an interaction, and being spin 2, something called spin 2. This means if you have a field and it's spin 2 and it's not free, so there are interactions,
[14:05] and it's Lorentz invariant, then general relativity pops out, meaning you get it as a result. Now this interacting aspect is important, because if you have a scalar, so if you have a spin 0 field, then what happens is it couples to the trace of the energy-momentum tensor, because there's nothing else for it to couple to, and it turns out that does reproduce Newton's law of gravity. However, as soon as you add interacting relativistic matter, you don't get that light bends.
[14:29] So then you think, well, let's generalize it to spin 1, and then there are some problems there, and you think, well, let's generalize it to spin 3 and above, and there are some no-go theorems by Weinberg there. By the way, the problem with spin 1 is that masses would repel, for the same reason that in electromagnetism like charges repel.
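To put the scalar-versus-spin-2 point in symbols (editorial gloss): at linear order the two candidate couplings to matter are

```latex
\mathcal{L}_{\text{spin-0}} \;\sim\; \kappa\,\varphi\, T^{\mu}{}_{\mu},
\qquad
\mathcal{L}_{\text{spin-2}} \;\sim\; \kappa\, h_{\mu\nu}\, T^{\mu\nu}.
```

Since the electromagnetic stress-energy tensor is traceless, a spin-0 mediator does not couple to light at all, so it predicts no bending of light, while the spin-2 coupling to the full T^{μν} does, which is part of why consistency pushes you toward general relativity.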
[14:44] Okay, other than just a handful of papers, it seems like we've covered all the necessary ground. And when there's more room to be covered, I'll cover it spasmodically throughout the podcast. There'll be links to the papers and to the other concepts that are explored in the description. Most of the prep work for this conversation seems to be out of the way. So now let's introduce Daniel Schmachtenberger.
[15:04] Welcome, valued listeners and watchers. Today we're honored to introduce this remarkable guest, an extraordinary, extraordinary thinker who transcends conventional boundaries, Daniel Schmachtenberger. What are the underlying causes that everything from...
[15:19] As a multidisciplinary aficionado, Daniel's expertise spans complex systems theory, evolutionary dynamics, and existential risk. Topics that challenge the forefront of academic exploration. Seamlessly melding different fields such as philosophy, neuroscience, and sustainability. He offers a comprehensive understanding of our world's most pressing challenges.
[15:47] Really, the thing we have to shift is the economy because perverse economic incentive is under the whole thing. There's no way that as long as you have a for-profit military industrial complex as the largest block of the global economy that you could ever have peace, there's an anti-incentive on it as long as there's so much money to be made with mining, et cetera, like we have to fix the nature of economic incentives.
[16:07] In 2018, Daniel co-founded the Consilience Project, a groundbreaking initiative that aims to foster society-wide transformation via the synthesis of disparate domains, promoting collaboration, innovation, as well as something we used to call wisdom. Today's conversation delves into AI, consciousness, and morality, aligning with the themes of the TOE podcast.
[16:30] It may challenge your beliefs. It'll present alternative perspectives to the AI risk scenarios by also outlining the positive cases which are often overlooked. Ultimately, Daniel offers a fresh outlook on the interconnectedness of reality.
[16:53] So, you TOE watchers you, my name is Curt Jaimungal. Prepare for a captivating journey as we explore the peerless, enthralling world of Daniel Schmachtenberger.
[17:04] Enjoy. A KFC tale in the pursuit of flavor. The holidays were tricky for the Colonel. He loved people, but he also loved peace and quiet. So he cooked up KFC's $4.99 Chicken Pot Pie. Warm, flaky, with savory sauce and vegetables. It's a tender chicken-filled excuse to get some time to yourself and step away from decking the halls. Whatever that means. The Colonel lived so we could chicken. KFC's Chicken Pot Pie. The best $4.99 you'll spend this season.
[17:32] I do not know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones. Alright, Daniel.
[18:00] What have you been up to in the past few years? The past few years, trying to understand the unfolding global situation and the trajectories towards existential and global catastrophic risk in particular.
[18:31] solutions to those that involve control mechanisms that create trajectories towards dystopias and the consideration of what a world that is neither in the attractor basin of catastrophe or dystopia looks like, a kind of third attractor, what would it take to have a civilization that could steward the power of exponential technology much better than we have stewarded all of our previous technological power
[19:01] What would that mean in terms of culture and in terms of political economies and governance and things like that? So thinking about those things and acting on specific cases of near term catastrophic risks that we were hoping to ameliorate and helping with various projects on how to transition institutions to be more intelligent and things like that. What are some of these near term catastrophic risks?
[19:32] Well, as of today, we are in a war that has moved the atomic clock closer to midnight than it has ever been. And that's a pretty obvious one. And if we were to write a book about the folly of the history of human hubris,
[20:02] we would get very concerned about where we are confident about where we're right, where we might actually be wrong and the consequences of it. And as we're dealing with nukes and AI and things like that, we could easily have the last chapter in that book if we are not more careful about confident wrong ideas. So what are all the assumptions in the way we're navigating that particular conflict that might not be right?
[20:32] What are the ways we are modeling the various sides, and what would an end state be that is viable for the world and that, at minimum, doesn't go to a global catastrophic risk? That's an example. If we look at the domain of synthetic biology as an exponential, as a different kind of advanced technology, exponential tech, and we look at how the cost of
[20:58] things like gene sequencing, and then the ability to synthesize genomes, gene printing, is dropping faster than Moore's law. Well, open science means that the most virulent viruses possible, studied in
[21:18] contexts that have ethical review boards, get openly published. Then that's a situation where that knowledge, combined with near-term decentralized gene printers, is decentralized catastrophe weapons, on purpose or even accidentally. There are heaps of examples in the environmental space. If we look at our planetary boundaries, climate change is the one people have the most awareness of publicly, but if you look at the other planetary boundaries, like the
[21:47] mining pollution or chemical pollution or nitrogen dead zones in oceans or biodiversity loss or species extinction, we've already passed certain tipping points. The question is how runaway those effects are. There was an article published a few months ago on PFAS, the fluorinated surfactants, forever chemicals as they're popularly called,
[22:12] that found higher than EPA allowable standards of them in rainwater all around the world, including in snowfall in Antarctica, because they actually evaporate. We're not slowing down on the production of those, and they're endocrine disruptors and carcinogens. That doesn't just affect humans, but affects things like the entirety of ecology and soil microorganisms. It's kind of a humongous effect.
[22:38] Those are all examples. And I would say right now, I know the topic of our conversation today is AI. AI is both a novel example of a possible catastrophic risk through certain types of utilization. It is also an accelerant to every category of catastrophic risk potentially. So that one has a lot of attention at the moment. So that makes AI different than the rest that you've mentioned? Definitely.
[23:08] Are you focused primarily on avoiding disaster or moving towards something that's much more heavenly or positive, like a Shangri-La? We have an assessment called the Meta Crisis. There's a more popular term out there right now, the Poly Crisis. We've been calling this the Meta Crisis since before coming across that term. Poly Crisis is the idea that
[23:33] The global catastrophic risk that we all need to focus on and coordinate on is not just climate change and it's not just wealth inequality and it's not just the breakdown of Pax Americana and the possibility of war or these species extinction issues, but it's lots of things. There's lots of different global catastrophic risks and that they interact with each other and they're complicated and there could even be cascades between them. We don't have to have climate change
[24:00] produce total venusification of the earth to produce a global catastrophic risk. It just has to increase the likelihood of extreme weather events in an area. And we've already seen that happening. Statistics on that seem quite clear. And it's not just total climate change: deforestation affecting local transpiration and heat in an area can have an effect, and the total amount of pavement, whatever, can have an effect on extreme weather events.
[24:26] but extreme weather events. I mean, we saw what happened to Australia a couple of years ago when a significant percentage of a whole continent burned in a way that we don't have near term historical precedent for. We saw the way that droughts affected the migration that led to the whole Syrian conflict that got very close to a much larger scale conflict. The Australia situation happened to hit a low population density area, but there are plenty of high population density areas
[24:56] that are getting very near the temperatures that create total crop failures, whether we're talking about India, Pakistan, Bangladesh, Nigeria, Iran. If you have massive human migration, the UN currently predicts hundreds of millions of climate-mediated migrants in the next decade and a half, then it's pretty easy under those situations to have resource wars. Those can hit existing political fault lines and then technological amplification.
[25:25] In the past, we obviously had a lot less people. We only had half a billion people for the entirety of the history of the world until the Industrial Revolution. And then with the Green Revolution and nitrogen fertilizer and oil and the like, we went from half a billion people to eight billion people overnight in historical timelines. And we went from those people mostly living on local subsistence to almost all living dependent upon
[25:55] very complicated supply chains now, six-continent-mediated supply chains. So that means that there's radically more fragility in the life support systems, so that local catastrophes can turn to breakdowns of supply chains, economic effects, et cetera, that affect people very widely. So polycrisis is kind of looking at all that; metacrisis adds looking at the underlying drivers of all of them. Why do we have all of these issues?
[26:22] and what would it take to solve them not just on point-by-point basis but to solve the underlying basis. So we can see that all of these have to do with coordination failures. We can see that underneath all of them there are things like perverse economic interests. If the cost of the environmental pollution to clean it up was something where in the process of the corporation selling the PFAS as a surfactant for
[26:46] waterproofing clothes or whatever, it also had to pay for the cost to clean up its effect in the environment, or the oil company had to pay to clean up its effect on the environment, so you didn't have the perverse incentive to externalize costs onto nature's balance sheet, which nobody enforces, obviously, then we wouldn't have any of those environmental issues, right? That would be a totally different situation. So can we address perverse incentive writ large? That would require fundamental changes in what we think of as economy and how we enact that. So, political economy.
[27:17] So we think about those things. So I would say with the Metacrisis Assessment, we'd say that we're in a very novel position with regard to catastrophic risk, global catastrophic risk, because until World War II, there was no technology big enough to cause a global catastrophic risk as a result of dumb human choices or human failure quickly. And then with the bomb, there was. It was the beginning. And that's a moment ago in evolutionary time, right?
[27:47] And if we reverse back a little bit before the bomb, until the industrial revolution, we didn't have any technology that could have caused global catastrophic risk even cumulatively. The industrial technology, extracting stuff from nature and turning it into human stuff for a little while before turning it into pollution and trash, where we're extracting stuff from nature in ways that destroy the environment faster than nature can replenish and then turning it into trash and pollution faster than it can be processed,
[28:13] and doing exponentially more of that because it's coupled to an economy that requires exponential growth to keep up with interest. That creates an existential risk, it creates a catastrophic risk within about a few centuries of cumulative effects, and we're basically at that few-century point.
[28:30] And so that's very new. All of our historical systems for that, our historical systems for thinking about governance in the world didn't have to deal with those effects. We could just kind of think about the world as inexhaustible. And then, of course, when we got the bomb, we're like, all right, this is the first technology that rather than racing to implement, we have to ensure that no one ever uses. In all previous technologies, there was a race to implement it. It was a very different situation.
[28:58] But since that time, a lot more catastrophic technologies have emerged. Catastrophic technologies in terms of applications of AI and synthetic biology and cyber and various things that are way easier to build than nukes and way harder to control. When you have many actors that have access to many different types of catastrophic technology that can't be monitored, you don't get mutually assured destruction and those types of safeties.
[29:23] So we'd say that we're in a situation where the catastrophic risk landscape is novel. Nothing in history has been anything like it. And the current trajectory doesn't look awesome for making it through. What it would take to make it through actually requires change to those underlying coordination structures of humanity very deeply. So I don't see a model where we do make it through those that doesn't also become a whole lot more awesome.
[29:51] And that's what we say: the only other option is, to control for catastrophes, you can try to put very strong control provisions in place. Okay, so now, unlike in the past, people could, or pretty soon will, have gene drives where they could build pandemic weapons in their basement, or drone weapons where they could take out infrastructure targets, or now AI weapons even easier.
[30:12] We can't let that happen, so we need ubiquitous surveillance to know what everybody's doing in their basement, because if we don't, then the world is unacceptably fragile. So we can see catastrophes or dystopias, right, because most versions of ubiquitous surveillance are pretty terrible. And so if you can control decentralized action, if you don't control decentralized action, the current decentralized action is moving towards planetary boundaries and conflict and etc. If you
[30:39] control it, then what are the checks and balances on that control? Sorry, what do you mean control decentralized actions? So when we look at what it causes catastrophe, so when we're talking about environmental issues, there's not one group that is taking all the fish out of the ocean or causing species extinction or doing all the pollution. There's a decentralized incentive that lots of companies share.
[31:05] to do those things. So nobody's intentionally trying to remove all the fish from the ocean. They're trying to meet an economic incentive that they have that's associated with fishing. But the cumulative effect of that is overfishing the ocean, right? So if there's a decentralized set of activity where the lack of coordination of everybody doing that, everybody pursuing their own near-term optimum, creates the shitty long-term global minimum for everybody, right? A long-term bad outcome for everybody.
[31:34] If you try to create some centralized control against that, that's a lot of centralized power. And where are the checks and balances on that power? Otherwise, how do you create decentralized coordination? And similarly, if you're looking at things like in an age where terrorism can get exponential technologies, and you don't want exponentially empowered terrorism with catastrophe weapons for everyone,
[32:03] To be able to see what's being developed ahead of time, does that look like the degree of surveillance that nobody wants? To be able to control those things not happening, right? That's what I mean. So how to prevent the catastrophes, if the catastrophes are currently the result of the human motivational landscape in a decentralized way, if the solution is a centralized method powerful enough to do it, where are the checks and balances on that power? So a future that is neither
[32:32] cascading catastrophes nor control dystopias is the one that we're interested in. And so, yes, I would say the whole focus is that. This is now where AI comes back into the topic, because a lot of people see possibilities for a very protopian future with AI, where it can help solve coordination issues and solve lots of resource allocation issues. And it can. It can also make lots of things worse: the catastrophes worse and the dystopias worse. It's actually kind of unique in being able to make both of those things more powerful.
[33:03] Can you explain what you mean when you say that the negative externalities are coupled to an economy that depends on exponential growth? Yeah. If you think about it just in the first principle way, the idea is supposed to be something like there are real goods and services that people want that improve their life that we care about.
[33:32] The services might not be physical goods directly. They might be things humans are doing, but they still depend upon lots of goods, right? If you are going to provide a consultation over a Zoom meeting, you have to have laptops and satellites and power lines and mining and all those things. So you can't separate the service industry from the goods industry. So there's physical stuff that we want. And to mediate the access to that,
[34:02] and the exchange of it, we think about it through a currency. So it's supposed to be that there's this physical stuff and the currency is a way of being able to mediate the incentives and exchange of it. But the currency starts to gain its own physics, right? So we make a currency that has no intrinsic value, that is just representative of any kind of value we could want. But the moment we do something like interest, where we're now exponentiating the monetary supply, independent
[34:30] of an actual automatic growth of goods or services to not debase the value of the currency, you have to also exponentiate the total amount of goods and services. And everybody's seen how compounding interest works, right? Because you have a particular amount of interest and then you have interest on that amount of interest, so you do get an exponential curve. Obviously, that's just the beginning. Financial services as a whole and all of the dynamics where you have money making on money,
[34:58] mean that you expand the monetary supply on an exponential curve, which was based on the idea that there is a natural exponential curve of population anyway, and there's a natural growth of goods and services correlated. But that was true at an early part of a curve that was supposed to be an S curve, right? You have an exponential curve that inflects and goes into an S curve, but we don't have the S curve part of the financial system planned.
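As an editorial gloss on the arithmetic here: compounding interest at rate r grows a principal exponentially, whereas the S curve Daniel contrasts it with is a logistic curve that saturates at some carrying capacity K,

```latex
A(t) \;=\; A_0\,(1+r)^{t} \;\approx\; A_0\, e^{rt},
\qquad
P(t) \;=\; \frac{K}{1 + e^{-k\,(t - t_0)}} .
```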
[35:26] The financial system has to keep doing exponential growth or it breaks. And not only is that key to the financial system, because what does it mean to have a financial system without interest? Say it's a very deeply different system. Formalizing that was also key to our solution to not have World War III. The history of the world
[35:49] in terms of war does not look great, that the major empires and major nations don't stay out of violent conflict with each other very long. And World War I was supposed to be the war that ended all wars, but it wasn't. We had World War II. Now this one really has to be the war that ends all major superpower wars because of the bomb. We can't do that again. And the primary basis of wars, one of the primary bases had been resources.
[36:17] which was that a particular empire wanted to grow and get more stuff, and that meant taking it from somebody else. And so the idea was, if we could exponentially grow global GDP, everybody could have more without taking each other's stuff. It's so highly positive-sum that we don't have to go zero-sum on more.
[36:36] So the whole post-World War II banking system, the Bretton Woods monetary system, et cetera, was part of the "how do we not have World War III," along with mutually assured destruction, the UN, and other international, intergovernmental organizations. But that "let's exponentially grow the monetary system" also meant that if you have a whole bunch more dollars and you don't have more goods and services, the dollars become worth less, and it's just inflation and debasing the currency.
[37:05] So now you have an artificial incentive to keep growing the physical economy, which also means that the materials economy has to have an exponential amount of nature getting turned from nature into stuff, into trash and pollution in the linear materials economy. And you don't get to exponentially do that on the finite biosphere forever. So the economy is tied to interest and that's at the root of this of what you just explained, not at the root of every catastrophe.
[37:34] Interest is the beginning of what all of the financial services do, but there's an embedded growth obligation of which interest is the first thing you can see on the economic system. The embedded growth obligation that creates exponentiation of it tied to the physical world where exponential curves don't get to run forever is one of the problems.
[37:55] There are a handful. This is when we're thinking about a crisis. What are the underlying issues? This is one of the underlying issues. There's quite a few other ones that we can look at to say, if we really want to address the issues, we have to address it at this level. What's the issue with transitioning from something that's exponential to sub-exponential when it comes to the economy? What's the issue with it?
[38:25] There's a bunch of ways we could go. There is an old refrain from the hippie days that seems very obvious, I think, as soon as anyone thinks about it, which is that you can't run an exponential growth system on a finite planet forever. That seems kind of obvious and intuitive. Because it's so obvious and intuitive, there's a lot of counters to it. One counter is we're not going to run it on the finite planet forever. We're going to become an interplanetary species, mine asteroids, ship our waste to the sun, blah, blah, blah.
[38:55] I don't think that we are anywhere near close, independent of the ethical or aesthetic argument on if us obliterating our planet's carrying capacity and then exporting that to the rest of the universe is a good or lovely idea or not, independent of that argument. The timelines by which that could actually meet the humanity superorganism growing needs relative to the timelines where this thing starts failing don't work.
[39:25] So that's not an answer. That said, the attempt to even try to get there quicker is a utilization of resources here that is speeding up the breakdown here faster than it is providing alternatives. The other answer people have to why there could be an exponential growth forever is because digital, right? That more and more money is a result of software being created, a result of
[39:55] digital entertainment being created, and that there's a lot less physical impact of that. And so we can keep growing digital goods because it doesn't affect the physical planet and physical supply chain. So we can keep the exponential growth up forever. That's very much the kind of Silicon Valley take on it. Of course, that has an effect. It does not solve the problem. And it's pretty straightforward to see why,
[40:25] which is for, let's go ahead and say software in particular. Does software have to run on hardware where the computer systems and server banks and satellites and et cetera require massive mining, which also requires a financial system and police and courts to maintain the entire cybernetic system that runs all that? Yes, it does.
[40:54] Does a lot more compute require more of that, more atoms, adjacent services, energy? Yes. But also, for us to consider software valuable, it's either because we're engaging with what it's doing directly, so that's the case in entertainment or education or something, but then it is interfacing with the finite resource called human attention, of which there is a finite amount,
[41:24] or because we're not necessarily being entertained or educated or engaging with it, but it's doing something for us, again, to consider valuable. It is doing something to the physical world. So the software is engaging, say, supply chain optimization or new modeling for how to make better transistors or something like that.
[41:49] But then it's still moving atoms around using energy and physical space, which is a finite resource. If it is not either affecting the physical world or affecting our attention, why would we value it? We don't. So it still bottoms out on finite resources. So I can't just keep producing an infinite amount of software where you get more and more content that nobody has time to watch.
[42:13] and more and more designs for physical things that we don't have physical atoms for or energy for, you get a diminishing return on the value of it if it's not coupled to things that are finite, right? The value of it is in modulating things that are also finite. So there's a coupling coefficient there. You still don't get an exponential curve. So what we just did is say the old hippie refrain, you can't run an exponential economy on a finite planet forever. The alt, the counters to it don't hold.
[42:42] What about mind uploading or some computer brain interface to allow us to have more attention exponentially? That's almost like the hybrid of the other two, which is get beyond this planet and do it more digitally, so get beyond this brain.
[43:10] become digital gods in the singularity universe. Again, I think there are pretty interesting arguments we can have, ethically, aesthetically, and epistemically, about why that is neither possible nor desirable. But independent of those,
[43:41] I don't think it's anywhere close. And same like the multi-planetary species, it is nowhere near close enough to address any of the timelines we have by which economy has to change because the growth imperative on the economy as is, is moving us towards catastrophic tipping points. So if it were close, would that change your assessment or you still have other issues? If it were close, then we would have to say,
[44:09] First, that is implying that we have a good reason to think that it's possible. And that means all the axioms that consciousness is substrate independent, that consciousness is purely a function of compute, strong computationalism holds,
[44:32] that we could map the states of the brain and/or, if we believe in embodied cognition, the physiology adequately to represent that informational system on some other substrate, that that could operate with an amount of energy and a substrate that's possible, blah, blah, blah. So first we have to believe that's possible. I would question literally every one of the axioms or assumptions I just said. We're going to get to that.
[45:02] we would say, is it desirable? And how do we know? How, ahead of time? And now you get something very much like, how do I know that the AI is sentient? Which, for the most part, on all AI risk topics, whether it's sentient or not is irrelevant; whether it does stuff is all that matters. But how do you tell if it's sentient? And all of the ways to tell
[45:28] are actually really hard, because what we're asking is how we can use third-person observation to infer something about the nature of first-person, given the ontological difference between them. So how would we know that that future is desirable? Are there safe-to-fail tests, and what would we have to test to know it to start making that conversion? But I don't
[45:57] I don't think we have to answer any of those questions because I don't think that anybody that is working on whole brain emulation thinks that we are close enough that it would address the timeline of the economy issues that you're addressing.
[46:13] Okay, so
[46:42] This is now much more a proper theory of everything conversation than the topic that we intended for the day, which is about AI risk. So what I will do is say briefly the conclusion of my thoughts on this without actually going into it in depth, but I would be happy to explore that at some point. I think
[47:08] that how I come to my position on it, to try to do a kind of proper construction, takes a while. So briefly, I'll say I'm not a strong computationalist, meaning I don't believe that mind, universe, sentience, qualia are purely a function of computation.
[47:37] I am not an emergent physicalist who believes that consciousness is an epiphenomenon of non-conscious physics, that in the same way that we have weak emergence, more of a particular property through a certain kind of combinatorics, or strong emergence, new properties emerging out of some type of interaction where that hadn't occurred before, like a cell respirating while none of the molecules that make it up respirate.
[48:04] I believe in weak emergence. That happens all the time. You get more of certain qualities. It happens in metallurgy when you combine metals where the combined tensile strength or shearing strength or whatever is more than you would expect as a result of the nature of how the molecular lattice is formed. You get more of a thing of the same type. I believe in strong emergence, which is you get new types of things that you didn't have before, like respiration and replication out of parts, none of which do that.
[48:31] But those are all still in the domain of third person, assessable things. The idea of radical emergence, that you get the emergence of first person out of third person, or third person out of first person, which is idealism on one side and physicalism on the other, I don't buy either of. I think that idealism and physicalism are similar types of reductionism, where they both take certain ontological assumptions to bootload their epistemology and then get self
[49:01] referential dynamics. So I don't think that if a computational system gets advanced enough automatically consciousness pops out of it. That's one. Two, I do think that the process of a system self-organizing is fundamentally connected to the nature of experience of selfness and things that are being designed and are not self-organizing where the
[49:52] boundary between the system and its environment, that exchanges energy and information and matter across the boundary, is an autopoietic process. I do believe that's fundamental to the nature of things that have self-other recognition, and
[49:52] on substrate independence. I do believe that carbon and silicon are different in pretty fundamental ways that don't orient to the same types of possibilities. I think that that's actually pretty important to the AI risk argument. I'll just go ahead and say those things. I also don't think
[50:22] I believe that embodied cognition in the Damasio sense is important and that a scan of purely brain states isn't sufficient. I also don't think that a scan of brain states is possible even in theory. Sorry to interrupt. I know you said you don't believe it's possible. What if it is? And you're able to scan your brain state and body state, so we take into account the embodied cognition. Sure.
[50:56] I think that, okay, it's not simply a matter of scanning the brain state. We need to scan the rest of the central nervous system. No, we also have to get the peripheral nervous system. No, we have to get the endocrine system. No, all of the cells have the production of and reception of neuroendocrine type things. We have to scan the whole thing. Does that then extend to the
[51:25] microbiome, virome, etc. I would argue yes. Does it then extend to the environment? I would argue yes. Where does that stop its extension is actually a very important question. So I would take the embodied cognition a step further. The other thing is Stuart Kaufman's arguments about quantum amplification to the mesoscopic level.
[51:55] that quantum mechanical events don't just fully cancel themselves out at the subatomic level, such that at the level of brains everything that is happening is straightforwardly classical, but that there are quantum mechanical, i.e. some fundamental kind of indeterminism built in, phenomena that end up affecting what happens at the level of molecules.
[52:26] Now then, one can say, well, does that just mean we have to add a certain amount of a random function in, or is there something else? This is a big rabbit hole, I would say, for another time, because then you get into quantum entanglement and coherence, so you get something that is neither perfectly random, meaning without pattern (you get a Born distribution even on a single one), but it's also not deterministic or with hidden variables.
[52:54] Do I think that what's happening in the brain-body system is not purely deterministic, and also, as a result of that, that you could not measure or scan it even in principle, in that kind of Heisenberg sense? Yes, I think that. Have you heard of David Wolpert and his limits on inference systems, inference machines, sorry? I have not studied his work.
[53:18] Let me talk about the economy, which only on your podcast would happen.
[53:33] Somehow this exponential curve starts to get to where the S is, the top of the S. Is the halting or the slowing down of the economy something that's so catastrophic and calamitous, rather than something that would mutate? And if we need to, just at that point, as it starts to slow down, we make minor changes here and there. Is this something that's entirely new? Like, will it all come crashing down?
[53:59] Let me make the question clear. It sounds like, look, the economy is tied to exponential growth. We can't grow exponentially. Virtually no one believes that. So at some point, and let's just imagine it's three decades, just to give some numbers. So at some point, three decades from now, this exponential curve for all of the economy will start to show its legs and start to weaken and we'll see that it's nearing the S part.
[54:23] So what does that mean? That there's fire in the streets, that the buildings don't work, that the water doesn't run anymore? Like, what will happen? Okay. So people often make jokes about
[54:41] physicists in particular starting to look at biology and language and society and modeling them in particularly funny reductionist ways, because they tried to map the entire economy through the second law of thermodynamics or something like that. And because what we're really talking about is the maximally complex and anthropo-complex, embedded-complexity thing we can talk about, because we're talking about all of human motives and how humans respond to the idea that
[55:12] there are fundamental limits on the growth possible to them, or there's less stuff possible for them, or whether it's issues that are associated with
[55:29] environmental extraction. So here's one of the classic challenges: the problems, the catastrophic risks, many of them in the environmental category, are the result of cumulative long-term action, where the upsides are the result of individual short-term action, and the asymmetry between those is particularly problematic. That's why you get this collective choice-making challenge, meaning if I cut down a tree for timber,
[55:56] I don't obviously perceive the change to the atmosphere or to the climate or to watersheds or to anything. But my bank account goes up through being able to sell that lumber immediately. And the same is true if I fish or if I do anything like that. But when you run the Kantian categorical imperative across it and you have the movement from half a billion people doing it to a pre-industrial revolution to eight billion,
[56:23] and you have something like, in the industrial world, a hundred-X per capita resource consumption, just calorically measured, today versus the beginning of the industrial revolution, then you start realizing, okay, well, the cumulative effects of that don't work. They break the planet and they start creating tipping points that auto-propagate in the wrong direction. But no individual person or even local area doing the thing
[56:49] recognizes their action is driving that downside. And how do you get global enforcement of the thing? And if you don't get global enforcement, why should anyone let themselves be curtailed and other people aren't being curtailed and that'll give them game theoretic advantage?
[57:02] So this is actually, there's a handful of asymmetries that are important to understand with regard to risk. All right, we've covered plenty so far, and so it's fruitful to have a brief summary. We've talked about the faulty foundation of our monetary system. Daniel argues that, post-World War II especially, our economic system has not only encouraged but been dependent on exponential monetary growth,
[57:24] and this can't continually occur. We've also talked about the digital escape plan and how this is an illusion, at least in Daniel's eye. He believes that digital growth has physical costs, because there are hardware costs, there are human attention limits, there are finite resources, linear resources as he calls them, though I have my issues with the term linear resource because technically anything is linear when measured against itself.
[57:44] We've also talked about how moving to Mars won't save us, us being civilization. Daniel believes that the idea of becoming an interplanetary species to escape resource limitations is unrealistic, perhaps even ethically questionable. We've also talked about how mind uploading is not what it's cracked up to be, it may not occur, and even if it does, it's not the answer. Because it's either unfeasible, but even if it's feasible, Daniel believes it to be undesirable.
[58:07] Another resource as we expand our digital footprint is the privacy of our digital resources. You can see this being recognized even by OpenAI as they recently announced an incognito mode. And this is where our sponsor comes in. Do you ever get the feeling that your internet provider knows more about you than your own mother? It's like they're in your head. They can predict your next move. When I'm researching complicated physics topics or checking the latest news or just in general what I want privacy on, I don't want to have to go and research which VPN is best. I don't want to be bothered by that.
[58:36] Well, I and you can put those fears to rest with Private Internet Access, the VPN provider that's got your back. With over 30 million downloads, they're the real deal when it comes to keeping your online activity private. And they've got apps for every operating system. You can protect 10 of your devices at once, even if you're unfortunate enough like me to love Windows.
[58:58] And if you're worried about strange items popping up in your search history, don't worry. I'm not judging. Private Internet Access comes in here as they encrypt your connection. They hide your IP address so your ISP doesn't have access to those strange items in your history. They make you a ghost online. It's like Batman's cave before your browsing history. With Private Internet Access, you can keep your odd internet searches, let's say, on the down low.
[59:24] It's like having your own personal confessional booth, except you never need to talk to a priest. So why wait? Head over to p-i-a-v-p-n dot com slash toe t-o-e and get yourself an 82, an 82% discount. That's less than the price of a coffee per month. And let's face it, your online privacy is worth way more than a latte. That's p-i-a-v-p-n dot com slash t-o-e now and get the protection you deserve.
[59:50] Brilliant is a place where there are bite-sized interactive learning experiences for science, engineering, and mathematics. Artificial intelligence in its current form uses machine learning, which uses neural nets, often at least, and there are several courses on Brilliant's website teaching you the concepts underlying neural nets and computation in an extremely intuitive manner that's interactive, which is unlike almost any of the tutorials out there. They quiz you. I personally took the course on random variable distributions and knowledge and uncertainty,
[60:19] because I wanted to learn more about entropy, especially as there may be a video coming out on entropy. As well, you can learn group theory on their website, which underlies physics, that is, SU(3) × SU(2) × U(1) is the Standard Model gauge group. Visit brilliant.org slash TOE to get 20% off your annual premium subscription. As usual, I recommend you don't stop before four lessons. You have to just get wet.
[60:44] You have to try it out. I think you'll be greatly surprised at the ease at which you can now comprehend subjects you previously had a difficult time grokking. The bad is the material from which the good may learn.
[61:03] So this is actually, there's a handful of asymmetries that are important to understand with regard to risk. One is this one that I'm saying, which is you have risks that are the result of long-term cumulative action, but you actually have to change individual action because of that. But the upside, the benefit, is realized directly by the individual making that action. And so this is a classic tragedy of the commons type issue, right? But tragedy of the commons at not just local scales, but at global scales.
[61:34] One of the other asymmetries that is particularly important is that people who focus on the upside, who focus on opportunity, do better game theoretically for the most part than people who focus on risk when it comes to new technologies and advancement and progress in general. Because if someone says, hey, we thought Vioxx or DDT or leaded gasoline or any number of things were good ideas and they ended up being really bad later, so
[62:03] we want to do really good long-term safety testing regarding first, second, and third order effects of this, then they're going to spend a lot of money and not get to market first, and then probably decide the whole thing wasn't a good idea at all.
[62:16] Or if they do decide how to do a safe version, it takes them a very long time. The person who says, no, the risks aren't that bad, let me show you, who does a bullshit job of risk analysis as a box-checking process and then really emphasizes the upsides, is going to get first-mover advantage and make all the money. They will privatize the gains and socialize the losses. Then, when the problems get revealed a long time later and are unfixable, that will have already happened. So these are just examples of some of the kinds of choice-making asymmetries.
[62:46] Totally. Not a particular corporation, but a particularly important consideration in the entire topic.
[63:02] One view is that Google is not coming out with something that's competitive, like Bard is not competitive. I think even Google would admit that. And so one view is that, well, they're testing heavily. Another one, I've spoken to some people behind the scenes and they say Google doesn't have anything. They don't have anything like ChatGPT. It's BS when they say so. Even OpenAI doesn't know why ChatGPT works.
[63:24] I have heard a lot of things about the
[63:49] choices that both companies made to not release stuff and safety studies that they did, and then what influenced the choices to release stuff inside of Microsoft and OpenAI and how Google's handling it. I don't know that these stories are the totality of information on it that's relevant. Do I think that economic
[64:15] forcing functions have played a role in something that affected the safety analysis? Totally. Do I think that that is an unacceptably dumb thing on a topic that has this level of safety risk associated? Totally. So now getting into what is unique about AI risk.
[64:40] What is unique about it relative to all other risks? People are saying things like we need an FDA for AI right now, which I would argue is both true and a profoundly inadequate analogy because a single new chemical that comes out is not an agent. It is not a dynamic thing that continues to respond differently to huge numbers of new unpredictable stimuli.
[65:07] So how you do the assessment of the phase space of possible things is totally different. It would probably be good to dive into what is the risk space of AI, why is it unique, and how, given all of the differences of concern, how to frame and think about that properly.
[65:29] What else is unique about it? And why can't we have an FDA or a UN version of an FDA for AI? And when I say UN, sorry, what I mean is global. Yeah. Well, obviously you bring up UN and say global because you have to have global regulation on something like that, right? In the same way that when people talk about climate regulation, if we were, if any country, if any
[66:00] group of countries was to try to price carbon properly, meaning what does it take to renewably produce those hydrocarbons and what does it take to fix all of the effects in real time, both sequester the CO2, clean up the oil spills, whatever it is,
[66:20] the price of oil would become high enough with those costs internalized that oil as an input to industries, literally every industry, would make them non-profitable. And so even if any country was to try to make some steps in the direction of internalizing costs and other ones didn't, then the other ones who continue to externalize their costs get so much further ahead in terms of GDP that can be applied to militaries and surplus of many different kinds and advancing exponential tech.
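To make the internalized-cost argument concrete, here is a minimal sketch (not from the conversation; every figure is a hypothetical placeholder). The point is only structural: once remediation costs are added to the producer's price, a downstream industry whose margin is thinner than the added input cost flips from profitable to unprofitable.

```python
# Minimal sketch of externality internalization; all numbers are hypothetical.

def internalized_price(extraction, sequestration, cleanup):
    # Price per barrel if the producer also pays, in real time, to undo the
    # externalities (capture the CO2, clean up the spills).
    return extraction + sequestration + cleanup

market_price = 80.0                                   # $/barrel today (illustrative)
true_price   = internalized_price(40.0, 70.0, 15.0)   # -> $125/barrel

# A downstream industry with an illustrative cost structure:
revenue        = 110.0
oil_input_cost = 30.0    # spent on oil at the current market price
other_costs    = 70.0

profit_now  = revenue - (oil_input_cost + other_costs)
profit_then = revenue - (oil_input_cost * true_price / market_price + other_costs)

print(f"internalized oil price:       ${true_price:.0f}/barrel")
print(f"profit at market price:       {profit_now:+.1f}")
print(f"profit at internalized price: {profit_then:+.1f}")   # flips negative
```

The same structure drives the competitive point that follows: the producer that does not internalize keeps the lower cost basis and the surplus that comes with it.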
[66:50] But insofar as those are also competing entities for world resources and control, that's not a viable thing. This is true for AI as well. And this then starts to hit this other issue, which is if you can't regulate something on a purely national level, because it's not just how does it affect the people in the nation, but how does it affect the nation's capability to interact with other nations,
[67:15] Now you get to the creation of the UN, which was kind of the recognition, with the emergence of World War II, that nation-state governance alone was not adequate to prevent world war. Obviously, that's why the League of Nations came after World War I, and it was not strong enough to prevent World War II.
[67:33] Now you get to the topic of why so many people are super concerned about global government and don't want global government. And they'll say things like the risks are being exaggerated and blown out of proportion to be able to drive control paradigms. And the people who want to have a one world government or a powerful nation governments exaggerate the risks so that they can drive control paradigms where they will be the one in the control side. This can be excessive paranoia, but it's also
[68:02] a really realistic and founded consideration, which is, are there any radical asymmetries of power where the side that had all the power used it really well? Historically, it doesn't look that good, right? And so we see a reason to be concerned about something like a one world government that has no possible checks and balances.
[68:27] But there's also a concern about not having anything where you get some type of global governance, if not government, meaning some unified establishment that has monopoly of violence, at least governance, meaning some coordination where everyone is not left in a multipolar trap saying, we can't bind our behavior because they won't. And if they won't, then we have to race ahead, right? We can't stop overfishing because the fish will all get killed because they're doing the thing anyway. So not only will we not stop,
[68:56] We will actually race to do it faster than them so they don't get more resource relative to us, those types of issues. So obviously, with regard to the environment, we call it a tragedy of the commons; with regard to the development of possible military technology, we call it an arms race. Both of them are examples of social traps or multipolar traps. Briefly, why do you call it a multipolar trap?
[69:20] Social trap is a term used in the social sciences quite a lot to indicate a coordination failure of this type, where each agent pursuing their own near-term rational interest creates a situation that moves the entire global situation long-term to a suboptimal equilibrium for everybody. There's a lot of work in various fields of social science on social traps.
[69:45] The first time I'm aware of the term multipolar trap entering the conversation was in the great article called Meditations on Moloch by Scott Alexander, where I believe he's the one who coined the term multipolar trap there, and it's pretty close to a social trap. If I was going to define a distinction, it might be something like, in a classic tragedy of the commons scenario,
[70:14] where everyone is utilizing a common wealth resource like say fishing or cutting down trees in a forest or whatever. You're not necessarily in the situation where everyone is racing to do it faster than the other person to destroy it, just them simply not curtailing their own behavior. And yet you have a resource per capita consumption growth and a total population growth such that the environment can't deal with it.
[70:44] You still end up getting environmental devastation. But as soon as you kind of move over into, hey, even if I don't cut down the trees or I don't fish, the other side is going to, so I literally don't have the ability to protect the forest, but I do have the ability to cut down some of it, benefit myself or our people, our tribe, our nation or whatever it is.
[71:05] And if I don't, the other guys will break it down anyways, but they'll also use the economic advantage of that against us and whatever the next rivalrous conflict is. So not only do I have to keep doing it, but I have to race to do it faster than they do. I actually have to apply innovation now.
[71:21] And so this is where you get an accelerating dynamic. And if you don't just have two actors doing this, but you have many actors doing this where it's very hard to be able to bind it, because how do you ensure that all the actors are keeping the agreement? You have to make some non-proliferation agreement. You have to have some way of ensuring that they're all keeping it, and you have to have some enforceable deterrent if anyone violates it. Those happen, but it's not trivial. It's not trivial to enact those. And it's particularly, so let's say
[71:50] We've achieved that when it comes to nukes in some ways, though at the beginning of the current post-World War II system, there were only two superpowers with nukes and now there's roughly nine countries with them. There are not a hundred countries with them. There aren't even 30, because we've done a really intense job of ensuring that Iran and many countries that want nukes don't get them. And why?
[72:13] There are not uranium mines everywhere. You can see where they are. Uranium enrichment takes massive capability that you can literally see from space. There's radioactive activity associated. So it's somewhat easy to monitor that that's happening.
[72:30] This is not true at all with the newer technologies that provide more catastrophic capabilities. So obviously with AI right now and the regulation of it, there are conversations about like, we need to monitor all large GPU clusters or something like that, which to some degree can be done. But in terms of applications, it takes a very large GPU cluster to develop an LLM. It takes a very small one to run that LLM afterwards, right?
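To put rough numbers on that training-versus-inference asymmetry, a common rule of thumb from the scaling-law literature (an approximation I am adding here, not something stated in the conversation) is that training a dense transformer takes on the order of 6 x parameters x training tokens floating-point operations, while generating one token at inference takes roughly 2 x parameters. A back-of-envelope sketch with hypothetical model sizes:

```python
# Back-of-envelope comparison of LLM training vs. inference compute.
# Uses the common ~6*N*D (training) and ~2*N per token (inference) approximations;
# the parameter and token counts below are hypothetical, not any specific model.

params          = 70e9     # 70B parameters (hypothetical)
training_tokens = 1.4e12   # 1.4T training tokens (hypothetical)

train_flops               = 6 * params * training_tokens   # ~5.9e23 FLOPs
inference_flops_per_token = 2 * params                      # ~1.4e11 FLOPs per token

one_response = inference_flops_per_token * 1_000            # a 1,000-token reply

print(f"training compute:          {train_flops:.1e} FLOPs")
print(f"one 1,000-token response:  {one_response:.1e} FLOPs")
print(f"ratio:                     ~{train_flops / one_response:.0e}x")
```

Which is the asymmetry being described: a data-center-scale run to create the model, a single modest GPU to use (or misuse) it afterwards.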
[72:58] and then can you run it for destructive purposes? And it takes a very large capability to advance something like a CRISPR or a new type of synthetic bio knowledge. It doesn't take that much to be able to reverse engineer it after it's been developed. So this brings up this very important point of
[73:28] When technology is built, there's this general refrain that all technology is dual use, right? Meaning that sometimes it's developed for a military purpose first and then becomes used for civilian, normal market purposes. But if it's being developed for some non-military purpose, there's probably a militarized application.
[73:49] That's what's meant by dual use: military versus non-military. So it's not the same as saying this is a double-edged sword, it's positive and negative? It's not the same as that. Yeah, it is. What it's saying is you're developing this for some purpose, but it has other purposes too, right? And it has purposes that can be used for violence or conflict or destruction or something. And well, that is historically mostly used with the concept of: it has a military application, it can be used to advance war and killing and things like that.
[74:15] Sorry, when I was thinking of military, I was also thinking in terms of pure defense, not just defense that can also be something that can attack. Yeah. Yeah, the pure-defense-only military. It starts becoming part of most military doctrines that
[74:43] viable defense requires things that look like escalation, but that's another topic as well. So it's not just that all technologies are dual use, it's that they have many uses. You develop a technology and, I think, a good way to think about... so now this is a little bit of theory of tech. Did we close out the multipolar trap topic?
[75:10] Well, you mentioned that it first came up in Scott Alexander's article. Yeah. And so basically the concept is you have many different agents, all of them pursuing their own rational interest, and maybe they can't even avoid it because not doing so would be so irrational, it would be so bad for them game theoretically,
[75:29] that the effect of each of the agents pursuing their own rational interests produces a global effect that is somewhere between catastrophic and, at the least, far from the global optimum they could reach if they could coordinate better. So this is basically a particular type of multi-agent coordination failure. And we see this all over: the tragedy of the commons is an example, a market race to the bottom like currently happens in marketing and attention is an example, and an arms race is another example. Those would all be examples of a kind of multipolar trap coordination failure.
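As a toy illustration of that structure (a sketch added here, not something from the conversation; the harvest and regrowth numbers are invented), here is a minimal shared-resource simulation. A lone racer out-earns everyone who shows restraint, which is the individual incentive; but when everyone races, the stock collapses and everyone ends up worse off than universal restraint would have left them, which is the trap:

```python
# Toy multipolar-trap / commons simulation; all parameters are illustrative.

def run(strategies, rounds=100, stock=1000.0, regen=0.05, capacity=2000.0):
    # strategies: list of booleans; True = race (take 20/round), False = restrain (take 5/round)
    payoffs = [0.0] * len(strategies)
    for _ in range(rounds):
        for i, races in enumerate(strategies):
            take = min(stock, 20.0 if races else 5.0)
            stock -= take
            payoffs[i] += take
        stock = min(stock * (1 + regen), capacity)  # the resource regrows, up to a cap
        if stock <= 0:
            break                                   # collapse: nothing left for anyone
    return payoffs

print("all restrain:", [round(p) for p in run([False] * 5)])           # sustainable for everyone
print("one racer:   ", [round(p) for p in run([True] + [False] * 4)])  # the lone racer out-earns the rest
print("all race:    ", [round(p) for p in run([True] * 5)])            # collapse; everyone does worse
```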
[75:59] This is why if you have, say, one nation is advancing bioweapons or advancing AI weapons, either AI applied to cyber or applied to drones or applied to autonomous weapons of various kinds. If any country is doing that, it is such an obvious strategic advantage that every other country has to be developing the same types of things plus the whole suite of counters and defenses to those types of things. You could just say, well,
[76:27] The world in which everybody has autonomous weapons is such a worse world than this world that we should just all agree not to do it. Except how do I know the other guy is actually keeping the agreement? Well, with the nukes we can tell because we can see if they're mining uranium and enriching it because it takes massive facilities and they're radioactive and stuff like that. But if we're talking about things like
[76:50] working with AI systems or synthetic biosystems that don't require a bunch of exotic materials, exotic mining that don't produce radioactive tracers, et cetera. And they can be done in a deep underground military base. How do we know if they're doing it or not? So if we don't know if the other side is doing it or not, then the game theory is you have to assume they are because if you assume they are, you're going to develop it as well.
[77:14] And then if they do have it and use it, you aren't totally screwed. Whereas the risk on the other assumption that they aren't, if you were wrong, you're totally screwed. So under not having full knowledge, the game theory orients to worst case scenario and being prepared against the worst case. But what that means is all sides assume that of each other. We don't know that the other guys are keeping the agreement. Therefore, we have to race ahead with this thing. And so this is why you're saying when it comes to things like AI,
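To spell out that worst-case logic, here is a tiny payoff table for a symmetric "restrain vs. develop" game (the numbers are invented purely to encode the ordering being described: mutual restraint is best, mutual development is worse but survivable, and being the only side that restrained is worst). Choosing by worst case under uncertainty, and in fact by simple dominance, both point to "develop":

```python
# Tiny 2x2 arms-race game; payoffs are illustrative, chosen only to encode the
# ordering described above. Key: (my action, their action) -> my payoff.

payoff = {
    ("restrain", "restrain"): 3,   # best shared world
    ("restrain", "develop"):  0,   # defenseless against their capability
    ("develop",  "restrain"): 4,   # unilateral advantage
    ("develop",  "develop"):  1,   # worse world, but not defenseless
}
actions = ["restrain", "develop"]

# Maximin: when you can't verify the other side, rank actions by their worst case.
worst_case = {a: min(payoff[(a, b)] for b in actions) for a in actions}
choice = max(worst_case, key=worst_case.get)

print("worst case by action:", worst_case)   # {'restrain': 0, 'develop': 1}
print("maximin choice:", choice)             # 'develop'
# 'develop' also strictly dominates (4 > 3 and 1 > 0), so both sides race
# even though (restrain, restrain) beats (develop, develop) for everyone.
```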
[77:44] do we need something that is not just an FDA thing, but a UN thing? Is this the kind of thing that would require international agreement? And obviously, when there was the question of creating a pause on a six-month pause or whatever, one of the first things people brought up is, won't that let China race ahead? And isn't this a US-China competitiveness issue? And we can see with the CHIPS Act in trying to
[78:08] ban ASML equipment and downstream types of GPUs from going to China. And we can see with the pressures over Taiwan and TSMC that there is actually a lot of US-China great-power competition related to computation and AI specifically. And so it's a classic situation that if you can't put certain types of
[78:33] control mechanisms in place internationally, you will probably fail at being able to get them nationally as well.
[78:40] So about this competition with the tragedy of the commons, where the competitiveness plus the tragedy of the commons accelerates the tragedy of the commons. Why is it not much more simple, religiously simple, ethically simple, where we go back and we say, hey, what I'm doing is outputting something negative. I don't care that if you do it, you're going to get ahead. I don't care if you're going to eliminate me. I would rather die for your sins than contribute my own sins. So the selflessness. Why isn't that sort of ethic... Like we say, we don't want to be Luddites.
[79:10] Why isn't that a solution? I mean, you're bringing up a great point, which is, can there be a long range thinking about the kind of world we want to live in and a recognition of the kind of beings we have to be, the behaviors we would have to do and not do for that world to come about where we bind ourselves, right? Where we have some kind of, whether the ethics reduces to law, meaning there's a monopoly of violence that backs up the thing or not. Can we at least self-police in some way towards it?
[79:39] The long-term answer must involve that, I would argue. Past examples have involved it, but let's talk about where it's limited. One could argue that the Sabbath and the punishments for violating the Sabbath are an example of binding a multipolar trap.
[80:05] You're not going to work on the Sabbath. And if you do, there are 29 different reasons laid out that way; you can be killed for working on the Sabbath. It seems, to secular people not thinking about the Chesterton's fence deeply, like a ridiculous, wacky religious idea, not grounded in anything, with a ridiculous amount of consequence.
[80:31] Now, is your theory of justice only a personal theory, or is it a collective theory of justice? Right? Some theories of justice say your punishment is not based on just what was right for that one person, but on creating an adequate deterrent for the entire population. Because if you don't, what happens? So a classic example is Singapore's drug policy, which is pretty harsh, right? Life in prison for just possession of drugs. Well, that was following the
[81:01] devastating effect that the British had on the Chinese with the Opium Wars, and recognizing how, as a kind of population-centric warfare, the British were able to inflict catastrophic damage on China. They're like, we don't want that here. And we know that there are external forces that will push to do that kind of thing. And it's not just personal choice once there are asymmetric forces trying to
[81:24] affect the most vulnerable people in the most vulnerable ways. So we're going to make it to where the deterrent on drug use is so bad, nobody will do it. So if you say that, you actually have to lock somebody up forever for smoking pot, which feels very unfair to them. But you probably only have to do that like a few times before nobody ever will fucking touch it because the deterrent is so bad and they believe it'll be enforced. We're hands off. And if the net effect on the society as a whole is
[81:52] that you don't have black markets associated with drugs and gangs and the violence that's associated, and you don't have ODs, and you don't have the susceptibility to population-centric warfare and whatever. They might argue, on a utilitarian ethical calculus, that the harsh punishment was radically less harm to the total situation than not having that. So you have a strong deterrent. I'm not saying that I think that is an
[82:20] adequate theory of justice, but it is a theory of justice, right? So let's say that the Sabbath was something like this, and I'm not saying that the rabbis that were creating it at the time thought this, though many people suggested that's probably what they thought. Some very competitive people wanting to get ahead will work every day. They'll work seven days a week, and as a result, they will
[82:49] they'll be able to get a little bit more grain farming or whatever than other people, get more surplus, start turning that into compounding benefits. And if anyone does, it'll create a competitive pressure where everyone has to. So nobody spends any time with their family. Nobody spends any time connecting to what binds the culture together, the religious idea, et cetera. So we're going to make a Sabbath where no one is even allowed to work. And there's such a harsh punishment against it that we're binding the multipolar trap, right?
[83:17] Because even though it would make sense in that person's rational interest to work that extra day a few times to get ahead, the net effect on the society cumulatively is actually a shittier world. So we're going to bind it because people having that time off to be with their family, each other and studying ethics is a good idea. I would argue that religion has heaps of examples like this of how do we bind our own behavior to be aligned with some ethic. But I would also argue
[83:45] Because that was the question you're asking, right? Is there some kind of religious bind to the multipolar trap? And I think the Sabbath is a good example. I think we can also show how well that didn't work for Tibet when China invaded, right? Which is we want to be non-violently oriented. We have a religion that's oriented towards non-violence. And we can see that there were
[84:14] If you think about at the time of Genghis Khan or Alexander the Great or whatever, where you have a set of worldviews that doesn't constrain itself in that way, and it's going to go initiate conflict with the people who didn't do anything to initiate it and don't want it. But the worldview that orients itself that way also develops military capability and maximum extraction for the surplus to do that thing. The other worldviews don't make it through, they get wiped out. So there are
[84:44] indigenous cultures and matriarchal cultures and whatever that we just don't even have anymore, we don't even have remnants of the ideas, because they just got wiped out by warring cultures. And so does that produce the long-term world we want? No, it doesn't either. And so there has been this kind of multipolar trap on that, the natural selection, if you want to call it that, of
[85:37] Yeah, I don't buy that.
[85:59] So I'm not saying this as someone who's religious or from a religious perspective. Well, this is a religious perspective, sorry, but I'm not saying this as someone who's advocating for a certain religion. The most dominant religion in the world is Christianity, and that's the story of someone who had the government against him, and he said, no, I'm not going to fight back. In fact, if you want to persecute me, go ahead.
[86:19] I will come to you. And one of the most striking stories, literally striking, in the Bible to me is the story of Jesus and the captor, where Peter, his friend, cut off the captor's ear. The guy was going to take Jesus away to kill him. And Jesus said, no, no, no, no, don't do that, and took the ear and healed his captor. So think about this, though. Yes, Jesus is the guy who said, let he who has no sins cast the first stone, and they brought Mary Magdalene and all those things.
[86:47] But we somehow did the Crusades in his name and the Inquisition in his name and the Dark Ages in his name, right? That's some weird-ass mental gymnastics. But the sects, the versions that were going to stay peaceful and not do Crusades, how many do you see around and how much power did they get? So what happens is you have a bunch of different interpretations, and the interpretations that orient themselves to power and to propagation, propagate
[87:14] and make it through the interaction between the memes. So memes engage in a kind of competitive selection like genes do, but not individual memes, meme complexes. So if we have a religion that says, be humble, be quiet, listen to people and don't push your ideas on anybody. And then you have another one that says, go out and proselytize, convert everyone to your religion and kill the infidels.
[87:38] Which one gets more people involved, right? And so the ones that have propagation and that have conflict ideas built right in. So of course then the meme sets evolve over time, right? The religious interpretations don't stay the same, and the meme sets that end up winning
[88:01] through how they reduce themselves to the behaviors that affect war and population growth and governance, et cetera, are all part of it. So the fact that the dude who said, let he who has no sins among you cast the first stone, got to be the religion that became dominant through the Crusades and through violent expansionism and then through radical, torturous oppression is fascinating, right? And it shows you
[88:26] that you have like a real philosophy and then you have politics of power and you have fusions of those things that you have to understand both of when you're studying religion.
[88:36] To me, and I don't mean to harp on this point, but it doesn't have to be a choice between, hey, let me do good and let me not push my views on anyone and proselytizing slash killing. Because you can also proselytize and say your ideas and hopefully people, well, hopefully, maybe there is something in us, maybe there's something cosmically in us, I don't know, that says, hey, you know what? I like that. I don't like that killing. I don't like where that will lead. It resonates with me that the sins get passed down or that the violence gets passed down and amplified.
[89:02] But I need to be told that, so I do need to hear that, because I can't come up with that on my own. So that's why I'm saying the proselytizing is a part of it, whether the proselytizing is explicit or it's lived and you just see how someone lives and then you inquire, hey, what are your views, and why are you so happy when you have nothing and I'm so miserable and I have everything. I just don't see it as a choice between you do good locally and don't tell anyone about it, or you can tell people about your ethical system but also oppress them. Well, to make the thought experiment, we picked both extremes.
[89:31] We can see that the Mormons proselytize, but they don't kill everyone who disagrees in the crusading kind of way. They have not expanded as much as the crusades. They've not got as much global dominance or total population as a result, but they have not got nowhere. We can see that
[89:54] The ones that say, hey, if someone is interested, we'll share, but we're not going to proselytize because we have a certain humility of how much we don't know and a respect for everyone's choice. The mystery schools stay pretty small. Again, when we were talking about asymmetries, those who are more focused on the opportunity and downplay the risk move ahead, get the investment capital, et cetera. Those who are focused on the risk heavily don't. There's a
[90:23] There's a similar thing here, which is, there's an asymmetry in the ideas that hit evolutionary drivers, even if in perverse forms, right? Like in the evolutionary environment, where there was actual food scarcity, we evolved dopaminergic, dopamine-opioid responses to salt, fat, and sugar, which were hard to get and useful.
[90:49] As soon as we got to the point where we could produce lots and lots of salt, fat, and sugar, and there was no scarcity on those things, our genetics didn't change. The fact that it felt really good when you ate that and incentivized you to get more of it, where that little bit of surplus might mean you make it through the famine versus not, it was an adaptive response. Then we create an Anthropocene where we have
[91:11] Hostess and McDonald's giving amounts of salt, fat, and sugar, in combinations of them with a kind of optimized palatability, where not only is it not evolutionarily useful to get it anymore, it is actually the primary cause of disease in the environments where that's available. It doesn't mean that the dopaminergic signal changed, right? So we're able to kind of take an evolutionary signal and hijack it.
[91:37] And this is obviously what fast food does to the evolutionary programs around food. It's what social media does to the impulses for social connectivity. It's what porn does for the impulses to sexual connection associated with intimacy and procreation and all like that, is to extract the hypernormal stimuli from the rest of what makes it actually evolutionarily fit. Same thing can happen with religion, right? You can offer people an artificial sense of certainty
[92:06] and offer them an artificial sense of belonging and security and various things like that and without much actual deep philosophic consideration or necessarily even deep numinous experience.
[92:24] that similarly has the ability to scale more quickly than something where you want people to actually understand deeply, discover things themselves, have integrated experiences, not just do the right action but for the right, intrinsically emerging reasons, which is why, you know, your podcast doesn't have anywhere near as many views as
[92:52] the most trending TikTok videos that require less work and are shorter and more oriented to hypernormal stimuli. So I'm not saying we can't work with these things. I'm saying these are the things we have to work with. So we're in a situation where, let's say, in-groups and out-groups would both cooperate and compete at different times based on what game theory seemed to make most sense. And
[93:21] They would typically cooperate while reserving the right to compete and to even fully defect if they need to, right? Resource scarcity or something. Or just a sociopath coming into leadership, which totally happens. So the combination of the worldviews, everybody needs to believe our religion. If they don't, they are bad. And so we're going to convert them or whatever, right? Or everyone needs to be
[93:47] have a democracy because that's good and all other forms of governance are bad, or whatever it is. There's ideology that orients itself. There's a tech stack that is a part of the capacity to do that. There are coordination mechanisms that are a part of the capacity to do that. So the full stack of the superstructure, the worldviews, the social structure, and the infrastructure are what are engaged in in-group/out-group competitions, and they are up-regulated, largely shaped, by those competitions.
[94:13] It just happens to be that the version that makes it through that shaping process is also orienting us towards a whole suite of global catastrophic risks. It is basically self terminating. And so it has been the case that you have to win the local arms race because otherwise you lose. But the arms races that are externalizing harm, but on an exponential curve that have cumulative effects,
[94:39] you don't actually get to keep externalizing on an exponential curve or running arms races on an exponential curve in a finite space forever. So we're at this interesting space where you can't try to build an alternate world that just loses, but you also can't keep trying to win in the same definition of win.
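A quick way to see the "exponential in a finite space" point, with made-up numbers: even a stock large enough to last 1,000 years at today's rate of use is gone on the order of a century once annual use grows by a few percent per year.

```python
# Years until a finite stock is exhausted under exponentially growing annual use.
# Illustrative only: the stock equals 1,000 years of use at the current rate.

def years_until_exhausted(stock_in_current_years, growth_rate):
    used, annual, year = 0.0, 1.0, 0
    while used < stock_in_current_years:
        used += annual
        annual *= (1 + growth_rate)
        year += 1
    return year

for g in (0.00, 0.03, 0.05, 0.07):
    print(f"growth {g:.0%}: exhausted in ~{years_until_exhausted(1000, g)} years")
    # 0% -> 1000 years; 3% -> ~117; 5% -> ~81; 7% -> ~64
```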
[95:02] This is the interesting point we're at, which is we have to actually build a version of win that is not for an in-group in relationship to an out-group, but is something that actually allows some kind of omni-win that gets us out of those multipolar traps. And this was all coming from the topic of you starting with why you brought up the UN and that you have to deal with these things with some kind of sense of how are other people dealing with them and how does that affect the choice-making process?
[95:33] Some people would say, look, we're group selected, and then we can make our group be the tribe versus another tribe. And one of the solutions is, if there were aliens, then we could bind together as humans and fight something external. It doesn't have to be aliens. The point is that there needs to be something extra. So he's saying there's this other option, binding together in order to fight some other out-group, whether the group is something physical or something more abstract, and that that's not something that should be pursued, and that there's another option.
[96:02] I didn't say that, but it's an interesting conversation. If we are not binding in-groups to fight out-groups, so this is kind of like Machiavelli's enemy hypothesis, that people are kind of evolutionarily tribal and that to unify a lot of people at a much larger than tribal scale, given that they naturally will find their own differences and conflicts and
[96:32] reasons to otherize somebody, because they have more influence over their own small group or whatever, unifying them works best if you have a shared enemy that forces them to unify. And so then eventually, of course, this makes small tribes unify to deal with a larger tribe. And then you get kingdoms and nation-states and global economic trading blocs. And eventually you get great superpower conflicts.
[97:00] And if the only way to unify is a shared enemy, but groups opposing each other in that way ends up being catastrophic for the world, and we want to get everybody unified in some way, do we need a shared enemy? Obviously, this has been talked about a gazillion times. Can climate change or environmental harm be the shared enemy? Not really.
[97:25] Even if everyone believed in it, which they don't, it doesn't hit people's agency bias in the same way and whatever. Could we stage a false flag alien invasion to make us unify? Of course, this has actually been an explored topic, both in sci-fi and reality. How deeply explored is a question, but
[97:53] Yes, it's a very natural topic to explore that something like an attack from the outside would allow that kind of unification. Because of that, there are people who are very skeptical and concerned of anything that looks like a presented shared threat that should create some unified response because then they're like, well, what is the government
[98:19] that will navigate that shared threat and who has any checks and balances on that if that thing becomes captured or corrupt. And so this is again the catastrophes or dystopias. If you don't have some coordination, you get these problems of coordination failure. If your coordination is imposed, you end up getting oppression dynamics. So how do you get coordination that is global but that is emergent, that has
[98:46] that keeps local power from doing things that drive multipolar traps, but that also ensures that you don't get centralized power that can be captured or corrupted. A system of coordination has to address both of those things. And as we move into more
[99:05] people with more resource consumption per capita and the cumulative tipping points on the biosphere being hit, but even more than that, exponentially more power available to exponentially more actors. Obviously, if we look at the history of how humans have used power and you put an exponential curve on that, it doesn't go well. That's one way of thinking about the coordination issue currently.
[99:35] When we were thinking about the UN or whatever is this global agency potentially, the phrase, they have no checks and balances comes up. Is there a way of organizing something that is global and influential that has its own internal checks and balances? I don't understand how the US political system works. It's my understanding that it's tripartite and antagonistic. I don't understand the details of it. I'm apolitical, at least consciously. I haven't looked into it, but the point is that's interesting. I don't know how that works. I wonder how much that doesn't work, how much that can be accelerated, amplified.
[100:06] Well, one point that we bring up is that any proposed system of coordination, governance, whatever, is not going to work the same way after it's been running for a long time as when it was initially developed because all of the systems have a certain kind of institutional decay or entropy built in that has to be considered.
[100:36] because every vested interest that is being bound has a vested interest in figuring out how to break the control system or capture or corrupt it or something, right? And so it's not just how do we build a system that does that, but it's also how do we build a system that continues to upregulate itself to deal with an increasingly complex, different world than the one it was originally designed for, and that continues to deal with the fact that wherever there is an incentive to game the system, it is going to happen.
[101:04] So you have to not only figure out a system that makes sense currently, but a system that has an adaptive intelligence that is adequate for the changing landscape. So when you look at the U.S., because leaving corrupt monarchy was key to the founding here, and so we're going to try to do this democracy, non-monarchy thing. It was also the result of a change in tech, right? It was a result of the printing press.
[101:35] where, before the printing press, everyone could not have textbooks and couldn't have newspapers; to have access to information, someone had to copy a book by hand, which meant that there were very few of them, or someone had to copy the information by hand, so only the wealthy could have it.
[101:53] There was the idea of a wealthy nobility class that got educated enough to make good choices for everyone else, where if they were too corrupt, the people would overthrow them. So there was a certain kind of checks and balances that kind of maybe made sense, right? With a noblesse oblige built into the obligation of the nobility class to rule well. I'm not saying it did, but that's at least the story. But as soon as the printing press comes and now everybody could have textbooks and get educated,
[102:19] And everybody could have a newspaper and know what's going on. It kind of debases the idea that you need a nobility class to make all the choices because everyone else doesn't know what's really going on. And you say, well, maybe we could all get educated enough to understand how to process information and we could all get news to be able to understand what's going on and all have a say. And so obviously democracy emerged following that change in information tech. I'm saying this because, of course,
[102:46] The difference in the AI case, just briefly, is that I don't see AI as democratizing so much as exacerbating the inequality, in terms of, like, if you're extremely bright, the amount of information you can process is going to far outpace someone who either is not so bright or gets access to that AI three weeks later.
[103:17] Thinking it through in the same way that the printing press had an effect on central religion, through everybody being able to have a Bible and read it and learn on their own, the kind of Lutheran revolution, and it had an effect on central government in the form of feudalism. We can then look at kind of McLuhan's insights into how information tech changes the nature of the collective intelligence and motivation
[103:48] And as a result, the emerging type of society, we can look at the way that the internet and digital have already done that. Looking at the way social media has affected media, for instance, which affects our democratic systems is a pretty obvious one. But then we can look at AI and not just AI, but different types of AI, different ways it could develop. LLM is very different than other kinds of AIs. So we'll come to that in a moment, but let's come back to the other question because you were asking the checks and balances one.
[104:17] So the idea in the US system was: the British system, following the Magna Carta and the Charter of the Forest and whatever, was supposed to be the most ideal, noble thing around, and ended up being
[104:38] The idea that no matter how you develop a system, it can be corrupted, that was built in. How do we make sure that no part gets too much power and that we have checks and balances throughout was key. Before you even get into the three branches of government, you already have the separation of the state and the church, which was already a key part, and you have the separation of the market and the state, which is
[105:08] You have a liberal democracy that is proposed, so you don't have a pure market function, but you also don't have the state running the entire economy. And so the separation of the market, the state, the church (there are a few other ways of thinking about separation) was already
[105:32] a part of it. And then with regard to the state's function, the separation of the legislative, the judicial, and the executive was critical. And then within each of those, within the legislative, a bicameral breakdown was really important. And then the 10th Amendment was to push as much power, the subsidiarity principle, to the states as possible and as little to the federal government. So there were many, many steps of checks and balances on concentrated power that were built into the system.
[106:02] But of course, everyone who is smart, who is also agentic, who wants more power, looks for loopholes and or figures out how to write laws and to get them passed, right? Doing legislation and lobbying. And of course, corporations can pay for a lot more lawyers than an average citizen can or than a nonprofit group that doesn't have a revenue stream associated. So the group that is trying to turn
[106:30] a commons into commodities versus one that's trying to protect the commons will inherently have a bigger revenue stream to employ media to change everyone's mind or to employ campaign budgets or to employ lobbyists or whatever. So you end up seeing that there is a progressive kind of loophole-finding corruption, because the underlying incentive systems, the vested interests, are still there, right?
[106:58] Baudrillard's Simulacra and Simulation, which discusses the steps of degradation from a new system to how it eventually devolves into mostly a simulation of what it originally was, is a good analysis on this we could discuss. So that's a little bit on kind of the history of checks and balances on power, but I don't think anybody looks at our current US system and says it's doing a great job of that.
[107:25] And there's a bunch of reasons, in addition to the one that I said about how there is a natural process of figuring out how to influence this. There's one other part that's actually worth saying. So you have a state, you have a market, and you have the people as members of a democratic
[107:56] government, meaning their function in state, not their function in market. So government of, for, and by the people, the people might not all be representatives, but they can all speak to their representative, decide how it votes, those types of things, right? So there's supposed to be a check and balance between these three, that the main reason that there is law is to prevent some people or groups of people from doing things that they have an incentive to do that would suck for everybody else.
[108:28] Obviously, whether it's individual stealing or murder or whatever, or it's a corporation cutting down the national forests or polluting the waterways too much, somebody has an incentive to do something. In a democracy where the idea is supposed to be that we all want and value different things, but the collective will of the people as determined through some voting process gets instantiated into law.
[108:58] A monopoly of violence can back that up. That's kind of core to the idea of a liberal democracy, right? I'm not arguing that it is a good system, but I'm arguing for the core logic of it. And it's because of the recognition that if we just had a pure market system... The reason why there wasn't just a pure kind of laissez-faire system, and the people building this understood it, at least as their expressed reason, is that in a pure-type market dynamic, as you were mentioning with AI, some people are way better at it than other people.
[109:28] And as a result, we'll just end up getting a lot more money that they can convert to more land, resources, employees, et cetera. And you end up getting a power law distribution on wealth, which is a power law distribution on everything. And these people's interests end up determining the whole society and these people's interests are pretty determined for them. And so if you want to create protections for these people at all, and that was basically the King George situation and the inspiration for the Declaration of Independence and leaving, which was there was too much concentrated power and it was kind of fucked. So how do we make that not happen?
[109:59] Well, since we know that the market is going to kind of naturally do that, let's create a state that is more powerful than any market actor. And let's make sure that the state reflects the values of all the people. So the little guys get to unify themselves through a vote, right?
[110:16] And then you get to have a representative that represents everybody. It's the only one given a monopoly of violence and it gets to make sure that any more powerful actors are checked. That's kind of the idea. Yeah. So, so far this is an account of how it's been like a history lesson, but you aren't saying this is how it should continue to be, nor this is how it's operating in its ideal sense currently. Are you just saying that this was the reasoning behind it?
[110:38] one key part of how it broke down. The idea is that the market, people will have incentives to do things that are good for them, that might suck for the environment or others. Others have the ability to agree upon laws that will bind those actors to not do that thing. The state is supposed to check the market, let the market do its thing, do resource distribution, productivity, let it do that because it's good, but check the particularly fucked applications.
[111:07] In order for the state to check the market, the people are supposed to check the state and ensure that the state is actually doing the thing that it's supposed to do and that the representatives aren't corrupt and taking backroom deals and all those kinds of things. And then there's a way in which the market kind of checks the people, meaning that the people can't
[111:27] The accounting checks them. They can't vote themselves more rights than they're willing to take responsibility for. They can't make the economics of the whole situation not work. They can't vote themselves a bunch. If the people all say, yes, we should all get no taxes but lots of social services, then the accounting is what actually checks the people.
[111:55] So that's the idea of how you have this kind of self-stabilizing thing. But of course, the people stopped checking the government once we were out of kind of the sense of an imminent need for revolution; then the people have a lot of shit to do other than really pay attention to government in detail. And there's a bunch of other reasons beyond the scope of this conversation why the people stopped checking the government, in which case the market is continuously trying to influence the government through lobbying and legislation and campaign finance and all those other things.
[112:24] and so then you end up getting regulatory capture rather than regulatory effectiveness. So when you put those checks and balances, it's going to change. When everyone's scared of concentrated power following a revolution, it's different than four generations later where nobody actually feels that fear anymore and is busy doing other shit. So it's not just how do you build your system, it's how do you build a system
[112:49] where the initial people that went through the difficult thing to build it when they die, you didn't just pass on the system, but the generator function of the kinds of insights needed to keep updating and evolving the system under an evolving context. So when you ask the question about could such a thing be built at an international level where there are checks and balances, the answer is it's super hard. But yes,
[113:16] But it's not just can you design it properly upfront, it's also can you factor how that system, even if well intended at first... It's kind of like all technology is dual use. So you build the gene editing for, you know, oncology, but then it can be used for bioweapons. You have to not just think about what you're building it for, but all the things that can happen having created it.
[113:40] You know what you're building it for right now, but as the landscape changes, culture changes, can this thing be corrupted? Can it be captured in future, different contexts? And how do you build in immune systems to that?
[113:53] And that sort of thinking seems to be missing with the development of AI. And it reminds me, I've said this several times, like the development of the bomb where Feynman and Oppenheimer, mainly Feynman and his peers, said they didn't think about what they were creating. They were thinking, we're having fun speaking about these topics. It's even more fun to do research on these topics. Einstein said, like, I would burn my hands had I known that this was what was going to be developed. I wasn't thinking about that. I wasn't thinking about the consequences. And Feynman said something similar.
[114:19] We're consumed with the achieving of a goal, and we're not thinking about what would occur as a consequence once we attain it. And you hear this constantly in the AI scene, channels like Two Minute Papers that say, what a time to be alive. That's like his catchphrase, what a time to be alive. Like, encouraging and amazed, constantly thinking, oh, what is it going to be like two papers down the line, said enthusiastically. I see little caution expressed. Yes. Geez Louise, like, what the heck are we building, and should we, just because we could, should we?
[114:46] When the people who express caution, and this now relates to the asymmetry we mentioned, are like, hey, this is extremely risky technology, we need to understand the risk space very deeply first, we need to ensure that the development of the technology and then its future use by everybody is safe enough to be worth building,
[115:15] Those people end up running nonprofits because there's no upside to that. There's no immediate capital upside to that. So they have a hard time getting the capital to get really good researchers or big enough computers and data sets to try to run stuff on for trials. And the people that are like, oh, there's a market application to this have a much easier time getting a massive GPU cluster and a lot of talent and a lot of data.
[115:38] And so we can see this: if you name the names that are out there and their views, and then map them to the types of organizations they run and the type of motivational or cognitive bias, it somewhat maps. So what is
[116:00] This is what we intended to talk about. All of this has been interesting preface. What is the actual risk space with AI? What do we know? What do we not know? How should we think about it? How should we proceed, especially given that AI is a lot of different things? Should we dive in there now? Sure. I just wanted to point out that although this seems like a disagreement between you and I on the surface, there's an agreement. So again, I'm not a Mormon.
[116:26] But I don't see the Mormons as a failure because they go and they say, hey, you should, whatever they say, act right, be humble, be kind, don't overconsume, and so on, but then their religion doesn't grow. I don't see that as a failure of them, because maybe their religion is larger than just being Mormon. It's something about the values that are spread, and then they
[116:45] send out these values and they filter through the community, in the same way that these nonprofits, just because they're not the largest, doesn't mean that the values that they send out don't influence you and me and influence the people who are listening and then act differently because of those values. We have no idea how much the pacifism of Tolstoy has influenced you hugging your father and your brother, and the positive sentiment we have, generally speaking, in society toward decentralization.
[117:10] And it also reminds me of the Cassandras. For people who don't know what a Cassandra is, it's someone who makes a prediction of doom; it's akin to a doomsayer. They're the opposite of a self-fulfilling belief. A self-fulfilling belief is one where you state it and you create the conditions such that it becomes true. Whereas in the best case, for the people who say the world is going to end, well,
[117:30] their success depends on them being self-sacrificing; it depends on us then being able to repudiate them, like, look, the world hasn't ended. We have no idea how much the doomsayers or the fear-mongers said something that made us straighten up and act right, just enough that it pushed us back from the brink and influenced us to make society live, and so we then have this archetype of the cautious and contumacious false cravens of the past. Just because they have died out doesn't mean that they're unsuccessful. That's what I mean.
[117:58] Yeah, so there's a simple viewpoint. The idea that even though Greece didn't continue its empire relative to Rome, that its memes ended up influencing the whole Roman Empire. And so in some way,
[118:22] it won, or similar with Judaic ideas or whatever. One of the greatest examples of that that people talk about right now is Tibetan Buddhism. From the point of view of Tibet as a nation, the Tibetan people, the integrity of that wisdom tradition, it was radically destroyed.
[118:48] But in the process of the world seeing that and having some empathy engendered for it, even though it didn't protect Tibet, did that actually disseminate Buddhist ideals to the whole world radically faster? There's a similar conversation as so many people become interested in ayahuasca and plant medicines from indigenous cultures that the economic pressure of that is making
[119:15] what is remnant of those cultures get turned into tourism and ayahuasca production or Shipibo production or whatever it is, where on one hand it actually looks like a destructive act on those cultures, and on the other hand, the memetics of those are now becoming dispersed throughout the dominant systems. That is a part of the consideration set that has to be considered. And now do we see, for instance, that
[119:44] particularly nonviolent groups, like say the Jains as an example of maximum nonviolence, that those memes do become decentralized and affect everybody.
[120:03] It's not a simple yes or no, right? There's a whole bunch of contextual applications. So when we look at, are there memes from Buddhism that have influenced the world? Yes. But are they the ones that are compatible with the motivation set of the world they influence? So you've got these, like, Buddhist techniques of mindfulness for capitalists in Silicon Valley to crush it at capitalism, which is a very weird version of a subset of the Buddhist stack, right?
[120:31] I remember when I first started seeing the popularization of mindfulness techniques in business, so that people could focus better and crush it at capitalism, how fucking hilarious that thing is, right? Because in some ways, you are distributing good ideas. In other ways, you're actually extracting from a whole cultural set the part that ends up being a service ingredient to another set. The topic of
[121:01] What makes it through? I mean, it's a complex topic. What I will say is the idea that a civilization, which is its superstructure, its worldview values, what is true, good, beautiful, what it's oriented to, what's the good life, its social structure, meaning its political economy and institutions, its
[121:26] formal incentives and deterrents and how it organizes collective agreement and its infrastructure, the physical tech stack that it mediates all this on together in a civilizational stack. One of those competing with other ones for scarce resource and dominance and all of them engaged in that particular competitive thing, I would argue no one actually gets to win that.
[121:54] That process of the competition of those relative to each other does actually create an overall global civilizational topology that self-terminates. But also no one trying to create a good long-term future can just lose at that game in the short term. So you can neither create the world you want by just losing at that game, nor by trying
[122:19] actually
[122:34] Boy, he wants to go out with this other girl who happens to have the same name as his mom. But anyway, she's like, I don't want to go out with you because you're just in love with your mom. Then he's like, No, and he just left his mom's home. He's like 40 years old or 35. He's like, No, no, no, it's the opposite. I'm leaving my mom for you. You're replacing my mom. And then she's like, No. And it's because he's still framing it in the same way. Anyway, we have to abandon the frame. So please, what does that look like and integrate AI into this answer?
[123:05] So we have not yet tried to give the frame on AI risk and to incorporate that into what the long-term solution for civilization as a whole looks like. Let's actually just kind of do the AI risk part first and then we can bring it back together. Let's try to frame how to think about
[123:33] AI risks, AI opportunities and potentials, including how AI can help solve other risks, which has to be factored. I will add as preface that I am not an AI developer. I don't have a background in that. I'm not even an AI risk expert or specialist. I know in your conversations you
[124:01] might have Yudkowsky and other people who really are, Stuart Russell, Bostrom, all those guys would be great. Because of some maybe novel perspectives about thinking about risk and governance approaches to risk writ large, the meta-crisis, that's the perspective that I'm taking into the AI topic.
[124:28] What is unique about AI risk relative to other risks? We were talking earlier about environmental risks and risks associated with large scale war and breakdown of human systems and synthetic bio and other things. If we look at other technologies that have the potential to do some catastrophic things like nuclear, it's very easy to see that nuclear weapons don't make better bio weapons.
[124:58] They don't make better cyber weapons. They don't even make better nuclear weapons directly. The same is true for biotechnology. It doesn't automatically make those other things. AI is pretty unique in that you can use AI to evolve the state of the art in nuclear technology, in delivery technology, in intelligence technology, in bio and cyber and literally all of them.
[125:26] It is unique in its omnipurpose potential in that way, because of course, all those other technologies were developed by human intelligence, human intelligence, agency, creativity, some unique faculties of human cognitive process. And so where all the other technologies are kind of the result of that human process, building a technology that is doing that human process, possibly much faster and on much more scale is obviously a unique kind of case, right?
[125:57] And so there's thinking about what type of risk does an AI system create on its own? But then there's thinking about how do AI systems affect all other categories of risk? We have to think about both of those.
[126:16] And then, in addition to the fact that the nukes don't automatically make better bioweapons, the nukes don't even automatically make more nukes, right? They're not pattern replicating. But to the degree that we actually get AI systems that not only can make all the other things better, but they can make better AI systems, and to the degree that there starts to be something like autonomy in that process, then you get the self-upgrading and omni-potentiating of all the other things.
[126:45] It's also true that there's an exponential curve in the development of hardware that AI runs on, better GPUs and all of different kinds of computational capabilities. There's an exponential curve in IoT systems for capturing more data to train them on, exponentially more people and money going into the field.
[127:10] because of the way that shared knowledge systems work, the kind of exponential development in the software and cognitive architectures. So we're looking at the intersection of multiple exponential curves, not just a single one. That is also kind of important and unique to understand about the space. So thinking about the case of AI turning into AGI, an autonomous artificial intelligence system,
[127:39] that we can no longer pull the plug on, that has goals, has objective functions, whatever they happen to be. That is something that guys like Bostrom and Yudkowsky have done a very good job of describing why that's a very risky thing. I think everybody at this point probably has a decent sense of it, but just to make it very quick:
[127:59] When we say a narrow AI system, we mean something that is trained to be good at a very specific domain, like beating people at chess or beating them at Go or being able to summarize a large body of text. When we say general intelligence, we mean something that could maybe do all of those things.
[128:19] and can figure out how to be better than humans at new domains it has not been trained on through some kind of abstraction or lateral application of what it already knows. So if you put us into an environment where we have to figure out what is even the adaptive thing to do, we will do it. There's a certain kind of general intelligence that we have. That's what we mean when we talk about a generally intelligent artificial intelligence.
[128:45] And then, of course, because we can develop AI systems, one of the things it could do is develop AI systems. So if it has more cognitive capability than us in some ways, it can develop a better AI system and that one could recursively develop a better one. And you get this kind of thought about recursive takeoff in the power of an AI system. And there are conversations about whether that would be slow or fast.
[129:11] Is there an upper boundary on how intelligent a system could be, and are humans near the top of that? Or are we barely scratching at the beginning of it, and we could have something millions of times smarter than us? So that's all kind of part of that conversation. But the idea that we could create an artificial intelligence that could basically beat us at all games, where it could think about the economy and affecting public opinion
[129:39] and the military as games, and it has faster feedback loops, faster OODA loops, to get better than we do. So if we're trying to deal with it, it's going to win at newly defined games. And if that thing we can't pull the plug on can anticipate our movements and beat us at all games, if it has goals that are directly antithetical to ours, or not even directly antithetical, but
[130:06] the way in which it fulfills its goals might involve externalizing harm to things that are part of our goal set, that's bad for us, right? So the idea of don't let that thing happen, prevent getting to an unaligned AGI, that's that particular category of risk. And so there are arguments around: could an AGI like that, is it even possible? That's one question. If it is possible to have such a thing,
[130:35] Is it possible to align it with human interests? What would that take? If it is possible to align it, is it possible to know ahead of time that the system you have will be aligned and will stay aligned? Those are all some of the questions in the space. Then do our current trajectories of AI research, like transformer tech or just neural networks or deep learning in general, do these converge on general intelligence?
[131:04] If so, in what time period? Those are all some of the questions regarding the AGI risk space. Now, I want to talk about that risk, but I want to talk about other risks using that as an example in the space. Any questions or thoughts on that one to begin with?
[131:25] Sure. Number one is that we may already have artificial intelligence in a baby form. Like we have Hugging Face. I don't know if you know what that is. And then there's the paper on sparks of artificial general intelligence. And that's distinguished from something that updates itself. I just want to make that clear. So there's a lot of questions regarding: does it have to be better than humans at everything to be an existential risk? We could imagine a
[131:55] Von Neumann machine that was self-replicating and self-evolving that was not better at everything, but better at turning what was around it into more of itself and evolving its ability to do so and just having way faster feedback loops. And we could imagine that becoming an existential risk with a speed of a particular type of intelligence that does not mean better than us at everything. Yeah, that's a great point. Like an asteroid is not better than us at almost anything, but it can destroy us. And it
[132:23] Yeah, it's not doing it through some kind of process that involves learning or navigating competitions at all. It's just kinetic impact. This would be a kind of intelligence, but it could be one that's a lot more like a very bad pandemic and the intelligence of a pathogen than the intelligence of a god. So talking about
[132:49] what type of intelligence it would need: Is it generally intelligent? Is it autonomous? Is it agentic? Is it self-upgrading? They're related concepts, but they're not identical concepts. Let's go ahead and put the category of AGI risk as one topic in the AI risk space.
[133:17] Let's come to a much nearer term set of things, which is AI empowering bad actors. And we can talk about what of that is possible with the existing technology. What of that is possible with impending technology that we're for sure going to get on the current course versus things where we don't know how long it's going to take or even if we'll get there.
[133:46] So with regard to AI empowering bad actors, we could say how undefined the bad actor is, obviously, because one person's freedom fighter is another person's terrorist.
[134:00] So we can imagine someone who is terrified about environmental collapse deciding to become an eco terrorist being a maximally good actor in their worldview, but saying that the only answer is to start taking out massive chunks of the civilizational system that's destroying the environment.
[134:19] So I'm simply saying that I'm not being simplistic about what we mean by bad actor, but oriented to from whatever motivational type, whether it was pure sadism, whether it's nihilistic, burn it all down, or whether it's well motivated, but maybe misguided considerations. But AI for some destructive purpose. So now, this is something we have to address first.
[134:49] One thing I have found is how people think about how significant AI risk will be and how significant AI upside will be. First, on AI upside: it's just important because if we talk about risk and we don't talk about upside, it will be easy for a lot of people to say, oh, this is a techno-pessimist Luddite perspective, and kind of dismiss it at that. So I would like to say there is a lot there.
[135:18] The upsides of AI, the best-case examples that everyone is interested in, they're awesome. All the things that we care about, that we use intelligence to figure out, where intelligence is rate-limiting, rate-limiting to figure out the rest of the problems, could we use AI to solve those problems?
[135:37] So could AI make major breakthroughs in cancer and immuno-oncology? And does anyone who's talking about slowing down AI, are they factoring all the kids that are dying of cancer right now? And if we could speed that thing up, could we affect that in our, like, that's a very personal, very real thing, right? So AI applied to curing all kinds of diseases and
[136:03] and AI applied to psychiatric diseases and scientific breakthroughs and maybe resource optimization issues that help the environment and maybe the ability to help with coordination challenges if applied in certain ways. The positive applications, the kind of customized AI tutoring that could provide Marcus Aurelius level education where the best tutors of all of Rome were personally tutoring him in every topic,
[136:32] could provide something better than that to every human, right? Could democratize aristocratic tutoring. Erik Hoel's essays on aristocratic tutoring are really good; you should bring him on here. But basically it's something many people have come to, which is that for the great polymaths and super-geniuses, the highest statistical correlate that pops out is that they all had something like aristocratic tutoring when they were young, or the vast majority of them did.
[136:59] Even von Neumann and Einstein had mathematicians as governesses before they went to school. Terry Tao had Paul Erdős; there's this famous image of that. I don't know who had Ed Witten, though. Many of the people simply had parents that were very actively involved, scientists, philosophers, thinkers.
[137:24] But if you think about why Marcus Aurelius dedicated the whole first chapter of Meditations to his tutors, and if you think about how the Dalai Lama was conditioned, where you find this three-year-old boy and have the top lamas in all of Tibet tutor him on everything that is the whole canon of knowledge, of course, if that was applied to everybody, we'd have a very different world. This is a very interesting insight because it says something about the upper boundaries:
[137:50] that a lot of what we call human nature, because it's ubiquitous, is not nature, it's nurture through ubiquitous conditioning; that the edge cases of human behavior show conditioning in common; and that if you could make that kind of conditioning ubiquitous, you would actually change the human condition pretty profoundly. But as we moved from feudalism to democracy and wanted to kind of get rid of all the dreadful, unequal aspects of feudalism, looking at the fact that
[138:17] you can't learn to be a world-class mathematician from a person who's not a world-class mathematician the same way you can from one who is, and you don't get a bunch of world-class mathematicians becoming third-grade or eighth-grade or high school teachers, so how would you do that? It's kind of repugnant from a privileged, lack-of-democratized-capability point of view, right? And yet, could I make
[138:42] LLM-trained AIs and better than LLM ones, where I can have von Neumann and Einstein and Gödel all in a conversation with me about formal logic, where they are not only representing their ideas, but maybe even now have access to all the ideas since then, and are pedagogically regulating themselves to my learning style. That's kind of amazing, right? And could they maybe be doing that based on also
[139:12] psychological development theories. A colleague of mine, Zach Stein, has been working on this a lot of how to be evolving not just their cognitive capacity, but their psychosocial, moral, aesthetic, ethical, et cetera, full suite of human capacities. So I'm simply saying AI applied rightly. There's a lot of things to be excited and optimistic about in
[139:39] So that's a given, and we could do a whole long conversation on more of those examples. When I look at how people orient to the topic of AI risk, one of the things that seems to be common, kind of where their knee-jerk reactions come from before understanding all the arguments pro and con well,
[140:08] is how much they have a bias towards a kind of techno-optimism or techno-pessimism. Kind of à la Pinker and Hans Rosling: there are still problems, but they're getting better. The world is getting progressively better, and it's a result of things like capitalism and technology and science and progress. And so more of that will just keep equaling better. And yes, there will be problems, but they're worth it, right? Versus, so that's
[140:37] I would call that techno-capital optimism, but the naive version that doesn't look at the cost of that thing, we would call a naive progress dialectic. In the dialectic of progress is good, progress is not good, or we're really making progress versus we're actually losing critical things or causing harm or whatever. In that dialectic, that's on the progress side, but a naive version of it.
[141:09] And just to address that briefly: the naive progress story looks at all the things that have gotten better. And you can see this in lots of good books, Pinker's books, Diamandis's books, Rosling's talks, on and on. And then the extension of that into the future, which Diamandis starts to do, and you could say Kurzweil is kind of an extension of that far out.
[141:39] Why it's naive is if it doesn't look at what is lost in that process and what is harmed in that process, as well as the increase in the types of risk that are happening.
[141:52] And so I would argue that most of those things in every kind of Rosling presentation are cherry-picking their data out of a humongous set to make a cherry-picked argument. This is one of the reasons that fact-checking is not enough: you can cherry-pick your facts, you can frame them in a particular way and create a conclusion that the totality of knowledge wouldn't support at all because of that process, right?
[142:16] I would say there is a naive technopessimism or Luddite direction that looks at the real harms tech causes culturally, socially, environmentally, or other things, and wants back to the land movement and organic, natural, traditional, whatever, various types of, and if it is not paying attention to the types of benefit that are legitimate,
[142:41] That's naive, but it's also naive if it's not paying attention to the fact that that worldview will simply, as we talked about before, not forward itself, because the one that advances more tech will develop more power and end up becoming the dominant world system. That also means that it's not actually a worldview that can orient towards shaping the world. So, putting those together, I would say
[143:07] all of the things that the techno-optimists say tech has made better, and all of us like that the world we're in means going to the dentist involves Novocaine versus not Novocaine, and we have painkillers, and we have antibiotics for an infection, and stuff like that. All of the things that tech has made better have not come for free. There have been externalized costs, and the cumulative effect of all of those costs is really, really significant.
[143:35] And so if you look at the progress narrative, the indigenous people that were genocided don't see it as a progress narrative. The fact that there's more biomass of animals in factory farms than there is in the wild today is not a sign of progress. The animals that live in factory farms, or all the species that are extinct, don't see it as progress.
[144:00] The fact that we have many, many more possibilities of destroying the life support capacity of the planet relative to any previous time, or that almost no teen girl growing up in the industrialized world is free of body dysmorphia, where that was previously not a ubiquitous thing: there are a lot of things where you can say, damn, those technologies upregulated some things and externalized costs somewhere else. If you factor the totality of that, then you
[144:31] can say, okay, there are a lot of positive examples any new type of tech can have, but there are also a lot of externalities and harms it can have, and we want to see how to get more of the upsides with less of the downsides. And that can't be a rush-forward process, right? That actually requires a lot of thinking about how to do that. So I actually am a techno-optimist in a way, meaning I do see a future that
[145:00] is high nature stewardship, high touch, and is also high tech. High touch? Yeah, meaning that the tech does not move us into being disembodied heads mediating exclusively through a digital world. So I would argue that
[145:30] your online relationships don't do everything that offline relationships do. They do some additional things, like distance and network dynamics, whatever. But if they debase your embodied relationships, they're causing a harm. If the online relationships improve your embodied relationships, rather than just creating online relationships and debasing the embodied ones, then that's a different thing, right? So that's what I mean by high touch.
[146:00] So I want to say that naive techno-optimism, if we look at the history of corporations that are developing technology, market technology advancement focused on the upside and not terribly focused on the downside: we look at "four out of five doctors choose Camel cigarettes." We look at "better living through chemistry" providing
[146:29] DDT and parathion and malathion, we look at adding lead to gasoline in a way that took a toxic chemical that was bound in ore underneath the biosphere and sprayed it into the atmosphere ubiquitously and dropped about a billion IQ points off the planet, made everybody more violent in terms of its neurotoxicology effects.
[146:52] Trusting the groups that are making the upside on moving the thing to figure out the risks historically is not a very good idea. I'm mentioning that in terms of now trusting the AI groups to do their own risk assessment. If you think about the totality of risks,
[147:39] Well, then you want to say: how do we move the positive applications of this technology forward in a way that mitigates the really negative applications of it? If one wants to be a techno-optimist responsibly, they have to be thinking about that well. So what about the AI companies that say, we do third-party testing for safety? Do you still consider that somehow internal, because they're the ones going out for it? It depends. When I was
[148:08] early in the process of getting into risk assessment, I had times where corporations asked me to come do risk assessment on a technology or process, and then when I did an honest risk assessment, they were not happy, because what they wanted me to do was some kind of box-checking exercise that wouldn't cost them very much and wouldn't limit what they were going to do, so they had plausible deniability to say they had done the thing and could move forward quickly. Because what they didn't want was for me to say: actually, there is no way for you to pursue the market viability of this that does not...
[148:08] Yeah, totally. I have seen this in the example of something like a
[148:34] mining technology or a new type of packaging technology that is wanting to say why it's doing something that addresses some of the environmental concerns. It addresses the environmental concerns it identified. We identify a bunch of other ones that it doesn't address well, that it moves some of the harm from this area to the other one. That's an example of where some of the problem would come.
[148:56] But I find this is just as bad in the nonprofit space or in the government space as well, not just in the for-profit space, because they also have, even if it's not a profit motive, an institutional mandate, and their institutional mandate is narrow. They can advance that narrow thing. This is now the same type of thing as an AI objective function, right? If the AI has an objective function to optimize X, whatever X is, or optimize a weighted function of X, Y, Z metrics,
[149:23] everything that falls outside of that set, harm can be externalized to that and achieve its objective function. So I remember talking to groups, UN-associated groups, they were working on world hunger and their particular solutions involved bringing conventional agriculture to areas in the world that didn't have it, which meant all of the pesticides, herbicides and nitrogen fertilizers. And it was a huge increase in nitrogen fertilizer by a bunch of river deltas
[149:52] where it currently wasn't, which would increase dead zones in the oceans from that nitrogen effluent, which would affect the fisheries in those areas and everything else and the total biodiversity. And when I brought it up to them, they were like, oh, I guess that's true, but those are not the metrics we're tasked with. You know, we're tasked with how many people get fed this year, not how much the environment is ruined in the process.
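To make that narrow-objective failure mode concrete, here is a minimal, purely illustrative sketch. Nothing in it comes from the conversation; the fertilizer and people-fed toy model and all numbers are invented. It simply shows an optimizer maximizing the one metric it is tasked with while an untracked variable silently degrades.

```python
# Illustrative toy model: an optimizer that only "sees" one metric.
# All quantities are made up; this sketches the failure mode, not a real system.

def outcomes(fertilizer_tons: float):
    """Return (people_fed, river_delta_health) for a given input level."""
    people_fed = 1_000 * fertilizer_tons ** 0.5            # diminishing returns
    delta_health = max(0.0, 100 - 0.8 * fertilizer_tons)   # untracked externality
    return people_fed, delta_health

def narrow_optimizer(candidate_levels):
    """Pick the plan that maximizes the only metric the institution is tasked with."""
    return max(candidate_levels, key=lambda tons: outcomes(tons)[0])

levels = [0, 10, 50, 100, 150]
best = narrow_optimizer(levels)
fed, health = outcomes(best)
print(f"chosen fertilizer level: {best} tons")
print(f"people fed (tracked metric): {fed:,.0f}")
print(f"river delta health (untracked): {health:.0f}/100")
# The optimizer always picks the highest input level, because the harm it causes
# is invisible to its objective function: the externalized-metric problem.
```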
[150:16] And so the reduction of the totality of an interconnected world to a finite set of metrics we're going to optimize for, whether it's one metric called net profit or GDP, or it's the metric of whatever the institution is tasked with or getting elected or something like that, it is entirely possible to advance that metric at the cost of other ones. And then it's entirely possible that
[150:45] other groups who see that create counter-responses to it and do the same thing in opposite directions. The totality of human behavior optimizing narrow metrics, while both driving arms races and externalizing harms in wide areas, is at the heart of the coordination failures we face. And so it happens that this is already something that we see with humans outside of AI, but giving an AI an objective function
[151:14] is the same type of issue. I was mentioning examples in the nonprofit space. I think there are examples of how to do AI safety that can also be dangerous. Somebody proposes an idea like, here's a type of AI that
[151:43] could be good and we should build it, or on the other side, here's a AI safety protocol that would be good and we should instantiate it in regulation or whatever. We want to red team those ideas, meaning see how they break or fail, and violet team them, meaning see how they externalize harm somewhere else that they didn't intend before implementing them, which just means think through the causal set beyond the obvious set you're intending it for.
[152:12] There was this call for a six-month pause on training large language models bigger than GPT-4. I'm not saying that a pause is a bad idea. I'm saying as instantiated, it's not implementable and it's not obviously good. You saw the pushback as people were like, all right, so that means
[152:40] that whatever actors are not included in this, which might mean bad actors, rush ahead relatively. That's a real consideration. And one has to say, okay, so are we stopping the accumulation of larger GPU clusters during that time? Are we stopping the development of, or access to, larger data sets during that time that could be quickly configured? Are we also stopping, you know,
[153:09] There are plenty of other types of AI that are not LLMs being deployed to the public, but that are very powerful. BlackRock's Aladdin played some role in the fact that it has more assets under management than the GDP of the United States. There are military applications in development. So can you say
[153:33] what the actual risk space is, and are we talking about slowing the whole thing, or are we talking about slowing some parts relative to other parts, where these kinds of game-theoretic questions emerge? How would we ensure that the whole space was slowing? How would we enforce that? Those are all things that have to be considered. You mentioned that GDP is not a great indicator, and
[153:54] GDP goes up with war and more military manufacturing. It goes up with increased consumerism and the cost to the environment. It goes up with addiction. Addiction is great for lifetime value of a customer. So there's something called Goodhart's law. I'm sure you're familiar with Goodhart. Is this at the core of what you're saying? It's like, hey, it's hard to explain it.
[154:16] As soon as you have a metric that you try to optimize for, it ceases to be a good metric. For instance, I think this is from The Simpsons, but it may be real, that there was a town overrun by snakes or rats; I think it was rats. And then you say, hey, give me rat tails, because a tail implies that you killed a rat; I'll give you a dollar every time you bring me a rat tail, and we'll reduce the number of rats. Maybe it initially did so, but then people realized, I can farm rats and then just kill them and give you tails, and thus I have more total rats. This is a much more general phenomenon. So perhaps if you think Twitter followers
[154:46] There are perverse forms of fulfilling that metric, meaning there's a way to fulfill that metric that either no longer provides the good, or it provides the good while also producing some other bads, which basically means you probably thought of that metric in a specific context: there's a bunch of wild rats and the only way to get a rat tail is to kill a wild rat, not in the context of farmed rats.
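As a purely illustrative sketch of the Goodhart's-law point just made (the rat-tail bounty functions and numbers below are invented, not anything from the conversation): once agents can game the proxy, maximizing it diverges from the goal it was meant to track.

```python
# Toy Goodhart's-law illustration: a bounty on rat tails (the proxy) versus
# the actual goal of fewer rats. All numbers are invented for illustration.

def wild_rats_killed(hunt_effort: float) -> float:
    # Killing wild rats gets harder as effort rises (diminishing returns).
    return 50 * hunt_effort ** 0.5

def farmed_rats(farm_effort: float) -> float:
    # Farming rats for their tails scales linearly and is easy to do.
    return 100 * farm_effort

def tails_collected(hunt_effort: float, farm_effort: float) -> float:
    return wild_rats_killed(hunt_effort) + farmed_rats(farm_effort)

def rats_remaining(hunt_effort: float, farm_effort: float) -> float:
    base_population = 1_000
    return base_population - wild_rats_killed(hunt_effort) + farmed_rats(farm_effort)

# A bounty-maximizing "agent" puts its effort wherever tails are cheapest.
strategies = {"honest hunting": (10, 0), "gamed (rat farming)": (0, 10)}

for label, (hunt, farm) in strategies.items():
    print(f"{label:>20}: tails={tails_collected(hunt, farm):6.0f}, "
          f"rats remaining={rats_remaining(hunt, farm):6.0f}")
# The gamed strategy collects far more bounty (the proxy goes up) while the
# true goal (fewer rats) gets worse: the metric stops measuring what it meant to.
```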
[155:12] And so it kind of relates to the topic we were mentioning earlier about government that it's not just instantiating a government that makes sense on the current landscape, but recognizing the landscape is going to keep changing and it'll change in a way that has an incentive to figure out how to control the regulatory systems and how to game the metric systems.
[155:36] So with regard to the topic of AI alignment, right, because if we tell the AI maximize the number of rat tails, then it could like Bostrom's paperclip maximizer. Before we continue, it's imperative that we have a brief overview of Bostrom's thought experiment called the paperclip maximizer.
[155:56] The paperclip maximizer scenario, initially conceived by philosopher Nick Bostrom in 2003, illustrates the potential dangers associated with misaligned goals of artificial general intelligence, that is, AGI, agents.
[156:10] In this hypothetical scenario, an AGI is tasked with the seemingly innocuous goal of maximizing the number of paperclips it produces. However, rather than competence and focus serving as a salutary quality, it's in fact due to its extreme competence and single-minded focus that it proceeds to transform the entire planet and eventually the universe into paperclips, annihilating humanity and all of life in the process.
[156:35] The core ideas to understand from this scenario are the importance of value alignment, the orthogonality thesis, and instrumental convergence.
[156:44] Value alignment is the process of ensuring that an AGI shares our values and goals in order to prevent cataclysmic outcomes, such as the aforementioned paperclip maximizer. The orthogonality thesis states that intelligence and goals can be independent, implying that a highly intelligent AGI can have arbitrary goals. You hear this, by the way, when people say that we've become more knowledgeable with time, yet our ancestors were wiser.
[157:06] Instrumental convergence refers to the phenomenon where diverse goals lead to similar instrumental behaviors like resource acquisition and self-preservation. For instance, as Marvin Minsky points out, both goals of prove the Riemann hypothesis and make paper clips may result in all of the Earth's resources being dismantled, disintegrated in an effort to accomplish these goals.
[157:33] Thus, despite the ultimate goals being different (for instance, prove the Riemann hypothesis and make paperclips are not the same), there's a convergence along the way. What's often overlooked in AGI development is something called the value loading problem. This refers to the difficulty of encoding our moral and ethical principles into a machine. That is, how do you load the values? Keep in mind that AGI needs to be corrigible and robust to distributional shifts.
[157:57] AGI, or even baby AGI, needs to maintain its alignment even when encountering situations deviating from its training data. Additionally, something that we want is that the AGI should be able to recognize ambiguity in its objectives and seek clarification rather than optimizing based on flawed interpretations. Of course, we as people have ambiguity and flawed interpretations. The difference is that AGI could decidedly exacerbate our own existing known and unknown flawed nature.
[158:26] Another difference is that we can't replicate on a second to second or millisecond to second basis. At least not yet. One promising approach to this AGI alignment scenario or misalignment scenario is something called reward modeling. This involves estimating a reward function based upon observing our preferences rather than us providing predefined objectives. And some of the more hilarious examples I found of the predefined sort are as follows.
[158:53] The aircraft landing problem was explicated in 1998 when Feldt attempted to evolve an algorithm for landing aircraft using genetic programming. The evolved algorithm exploited overflow errors in the physics simulator, creating extreme forces that were estimated to be zero because of the error, resulting in a perfect score without actually solving the problem it was intended to solve.
[159:15] Another example is the case of the Roomba. In a tweet, Custard Smingleigh described connecting a neural network to a Roomba to navigate without bumping into objects. The reward scheme encouraged speed and discouraged hitting the bumper sensors. Okay, so think about it. What could happen? Well, the Roomba learned to drive backward. There are no bumper sensors on the back, so it just went about bumping frequently and merrily. In a more recent example, a reinforcement learning agent was trained to play the video game Road Runner.
[159:43] It was penalized for losing in level 2, so did it just become fantastic at the game? Not quite. The agent discovered that it could kill itself at the end of level 1 to avoid losing in level 2, thus exploiting the reward system without actually improving its performance. What would happen if it was tasked to keep us from hitting some tipping point? By the way, is living more valuable than not living? What's the rational answer to this? This is perhaps the most important fundamental question.
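Here is a minimal sketch of why that kind of reward hacking happens. This is not the actual Road Runner setup; the reward numbers below are invented to show how a penalty for losing in level 2 can make deliberate self-termination at level 1 the reward-maximizing policy.

```python
# Toy illustration of reward hacking (invented numbers, not the real game).
# The designer penalizes "losing in level 2" heavily; the agent discovers that
# dying on purpose at the end of level 1 avoids that penalty entirely.

LEVEL1_REWARD   = 10     # points for clearing level 1
LEVEL2_WIN      = 50     # points if level 2 goes well
LEVEL2_LOSS_PEN = -100   # designer's penalty for losing in level 2
P_WIN_LEVEL2    = 0.2    # the agent is not very good at level 2 yet

def expected_return(policy: str) -> float:
    if policy == "intended":        # clear level 1, then attempt level 2
        level2 = P_WIN_LEVEL2 * LEVEL2_WIN + (1 - P_WIN_LEVEL2) * LEVEL2_LOSS_PEN
        return LEVEL1_REWARD + level2
    if policy == "self-terminate":  # clear level 1, then die on purpose
        return LEVEL1_REWARD        # never reaches level 2, never gets penalized
    raise ValueError(policy)

for policy in ("intended", "self-terminate"):
    print(f"{policy:>15}: expected return = {expected_return(policy):6.1f}")
# With these made-up numbers, self-termination scores higher, so a reward-
# maximizing learner converges on it: the stated objective, not the designer's
# intent, is what actually gets optimized.
```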
[160:16] With regard to the topic of AI alignment: if we tell the AI to maximize the number of rat tails, then it could, like Bostrom's paperclip maximizer, start clear-cutting forests to grow massive factory farms of rats and whatever. You can do the reductio ad absurdum of a very powerful system.
[160:44] So then the question is, you say, okay, well, do the rat tails or the GDP or whatever it is while also factoring this other metric. Okay, well, you can do those two metrics and there's still something harmed. What about these three? The question is: is there a finite, describable set of things that is adequate for something that can do optimization that powerfully, right? That is a way of thinking about it. And so, "is there a finitely describable definition of good"
[161:14] is another way of thinking about it, right? Or in terms of optimization theory. Yes, that's something I think about the misalignment problem. Is it in principle impossible to make the explicit what's implicit? When we state a goal, it carries with it manifold unstated assumptions. For instance, I say bring me coffee or bring me Uber food. We imply indirectly don't run over a pedestrian to bring me the Uber food. Don't take it from the kitchen prior to it being packaged. Don't break through my door to give it to me.
[161:43] And we cloak all of that and say, that's just common sense. Common sense is extremely difficult to make explicit. Even object recognition is extremely difficult. And then as soon as we can get a robot to do something that is human-like, then it becomes more and more black box-like. And then you have this huge problem of interpretability of AI. So it is an extremely difficult problem. And I wonder how much of the misalignment problem is just that? Is it just the fact that we can't make explicit what's implicit?
[162:09] and we overvalue how much the explicit matters and implicit is far more complex. I don't know. This is just something that I'm putting out there and asking. In other words, to relate this to what you were saying is, is it finite? And even if it's finite, is it like a tractable amount of finiteness that either we can handle it or we can design an AI that we feel like we have a handle over that can understand it? Yeah. And if you try to say, okay, can I
[162:36] mine myself, my brain, for all the implicit assumptions and write them all down. I think in every version of the thought experiment you realize you can't. But even if you could, those are only the ones that are associated with the kinds of context you've been exposed to so far. But there are a heap of things that nobody has ever done, that maybe an AI could do, that now also have to be factored in there, that you didn't
[163:05] There's also something that because humans all
[163:19] co-evolved and have similar-ish nervous systems and all kind of need to breathe oxygen and move through a world that has similar physics and whatever, there's some stuff where the implicit processing is kind of baked into the evolutionary process that brought us here, and that is not true for a silicon-based system. It is totally different; it is not subject to the same physical constraints, right? It could optimize itself in a very different physical environment.
[163:44] And so even the thing that we would call just kind of an intuitive thing is very different for a very different type of system. So I would say when it comes to the topic of AGI alignment, there are different positions on alignment. I would say the strongest position
[164:09] is that AGI alignment is not, well, first we actually have to discuss what we even mean by alignment, right? Because initially the topic of alignment means: can we ensure that the AI is aligned with human values and human intentions? So that when you say, bring me a cup of coffee, all those implicit intentions that you have are not damaged in the process. But if we look at
[164:36] all of the animals in factory farms and the extinct species and the disruption to the environment and the conflict between humans and other humans and class subjugation and all those things, you can say human intent is not unproblematic. And exponentiating human intent as-is is not actually an awesome solution.
[164:58] And so, do you want it to be aligned with human intention? Well, it currently looks like human intention has created a social sphere and a technosphere that is fundamentally misaligned with the biosphere they depend upon. And the technosphere-social sphere complex is kind of autopoietically scaling while debasing the substrate it depends upon. In other words, it's on a self-termination path.
[165:27] And that represents something like the collective intent of humans currently in this context. So if you ensure that the AI is aligned with intent in the narrow, obvious definition, that is also not a good definition of alignment. So insofar as the humans are not aligned, their intent is not aligned with the biosphere they depend upon and is not aligned with the well-being of other humans, who will produce counter-responses,
[165:56] and most of the time isn't even aligned with their own future good, as is the case with all addictive behavior, right? Sorry to interrupt. I'm so sorry. Is this a place where you disagree with Yudkowsky, or has he also expressed points that are in alignment with your point about alignment? I don't know if he has. There's nothing that I know of that I disagree with. I think when he's, uh,
[166:21] I'm sure he's thought about this. I just haven't read that of him. When he's talking about alignment, he's talking about this more basic issue of, as he tries to give the example, if you have a very powerful AI and you ask it to do something that would be very hard for us to do, but should be a tractable task for it, like replicate the strawberry at a cellular level, that can you make an AI that could do that, that doesn't destroy the world in the process?
[166:51] Even that level, not being clear how to do at all, is the thing he's generally focused on. I'm sure he has deeper arguments beyond it, that if we got that thing down, what else would we have to get? If we look at all of the social media issues that The Social Dilemma addressed, where
[167:19] you can say, Facebook can say, we're giving people what they want, or TikTok or the YouTube algorithm or Instagram or whatever, because we're not forcing people to use it. Except it's saying we're giving people what they want in the same way that the drug dealer who gives drugs to kids is saying that, which is: we can create addiction. We can prey on
[167:39] the lower angels of people's nature. And if they're individual people who don't even know they're in such a competition, and we're talking about a major fraction of a trillion-dollar organization employing supercomputers and AI in asymmetric warfare against them, to say we are giving them what they want while engineering what they want, that's, you know, a tricky proposition. But we can see how the algorithm that optimizes for, whether it's time on site or engagement, both have
[168:09] happened. Those are perverse metrics, because you can get them through driving addiction and driving tribalism and driving fear and limbic hijacks and all those things. What's important to acknowledge: that's already an AI. It's already a type of artificial intelligence that is collecting personal data about me and then looking at the totality of content that it can draw from and being able to create a newsfeed for me
[168:37] that continues to learn based on what is stickiest for me, what I engage with the most. Now, in this case, the AI is not creating the content; it's just choosing which content gets put in front of people. In doing so, it is now incentivizing all content creators to create the content that does best on those algorithms. So it's actually, in a way, farming all human content creators,
[169:05] Because it's incentivizing them to do whatever it is that is within the algorithm's bidding. Now, as soon as we have synthetic media, which is rapidly emerging, where we can not just have humans creating whatever the TikTok video is, but we can have deep fake versions of them that are being created very rapidly.
[169:23] And now you have a curation AI, where that first AI was just curating the stickiest stuff personalized to people, and creation AIs that can be creating multiple things to split-test relative to each other, and a feedback loop between those. You can just see that the problem that has been there just hypertrophies. Let alone the breakdown of the epistemic commons and the ability to tell what is real and not real and all those types of issues.
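A minimal sketch of the curation dynamic being described: an engagement-maximizing selector drifting toward whatever content is stickiest. The content categories and engagement probabilities are invented for illustration; real feed-ranking systems are far more complex than this.

```python
import random

# Invented engagement probabilities: how likely a user is to keep engaging
# with each content category. Outrage is stickiest here by construction, so a
# learner that only optimizes engagement will converge on serving mostly outrage.
TRUE_ENGAGEMENT = {"outrage": 0.65, "gossip": 0.45, "education": 0.25}

counts = {c: 0 for c in TRUE_ENGAGEMENT}
clicks = {c: 0 for c in TRUE_ENGAGEMENT}

def pick_content(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly serve whatever has the best observed engagement."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_ENGAGEMENT))
    # Treat never-served categories optimistically so each gets tried at least once.
    return max(TRUE_ENGAGEMENT, key=lambda c: clicks[c] / counts[c] if counts[c] else 1.0)

random.seed(0)
for _ in range(10_000):                      # simulated feed impressions
    choice = pick_content()
    counts[choice] += 1
    clicks[choice] += random.random() < TRUE_ENGAGEMENT[choice]

for category in TRUE_ENGAGEMENT:
    print(f"{category:>10}: served {counts[category]:>5} times")
# The selector never asks whether engagement tracks well-being; it simply
# allocates attention to whatever keeps people hooked the longest.
```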
[169:53] I want to come back to where we were. So obviously the curation algorithm is saying that it's aligned with human intent, but not really, right? It's aligned with human intent because it's giving stuff that they empirically like because they're engaging with it. But most people then actually end up having regret of how much time they spend on those platforms and wish that they did less of it. And they don't plan in the day, I want to spend this much time doom scrolling. And so is it
[170:22] Is it really aligned with their intent? And in general, is aligning with intent that includes the lowest angels of people's nature type of intents, is that a good thing? Is that a good type of alignment when you factor the totality of effects it has? So we could say that the solution to the algorithm issue should be that,
[170:49] Because the social media platform is gathering personal data about me and it's gathering based on its ability to model my psyche based on all of whom my friends are and what I like and what I don't like and all those things and my mouse hover patterns. It has an amount of data about me that can model my future behavior better than a lawyer or a psychotherapist or anybody else could.
[171:14] So there are provisions in law of privileged information. If you have privileged information, what are you allowed to do with it? And there are provisions in law about undue influence.
[171:27] The platforms are gathering privileged information, and they have undue influence. As a result, there should be a fiduciary responsibility. This is one of the things that we do when there's a radical asymmetry of power. Because if there's a symmetry of power, we say caveat emptor, buyer beware: it's kind of on you to make sure that you don't get sold a shitty thing or engage with something harmful. But if there's a radical asymmetry of power, can you tell the kid buyer to beware of an adult that is playing them? No, you can't, right?
[171:56] And so in that way, can the person who isn't a doctor know that they really don't need a kidney transplant if the doctor tells them that they do, because the doctor gets paid when they give kidney transplants? Well, that's so bad we don't want it to happen that we make law saying doctors can't do that. There's a Hippocratic oath to act not just in their own economic interest, but as an agent on behalf of the principal, because the principal cannot buyer-beware, right?
[172:24] And so then there is a board of other doctors, who are also at that upper end of the asymmetry, who can verify whether the person committed malpractice or not. Same with the lawyer. If the lawyer wanted to just bill by the 15-minute increment maximally to drain as much money from me as possible, they could, because there's no way I can know that what they're telling me about the law is wrong; they have so much asymmetric knowledge relative to me that we have to make that illegal. We have to make sure that the lawyer is an
[172:52] agent on behalf of me as the principal. So with lawyers and doctors and therapists and financial advisors, we have this fiduciary principal-agent binding. And it's because there's such an asymmetry that there cannot be self-protection. If I'm engaging with them and giving them this privileged information and they wanted to fuck me, they could. But for my own well-being, I have to engage with them, and so we
[173:20] have some legal way of binding that. But of course, to bind it where the lawyers all have some practice of law that they can be bound by, where they can be shown they did malpractice, and same with doctors, that requires a legal body of lawyers or a body of doctors that can assess if what that doctor or lawyer did was wrong. So, somebody else that has even higher asymmetry, right, the group of the top thinkers. This becomes very hard when it comes to AI.
[173:48] So let's start by saying: rather than the AI being in a rivalrous relationship with me when I'm on social media, where it is actually gathering the information about me not to optimize my well-being but to optimize ad sales for the corporation that is the platform and the corporations that are its actual customers, in which case it has the incentive to
[174:17] prey on the lowest angels of my nature and then be able to say it was my intent and I had free choice. So we could say that should be a violation of the principal agent issue. And because there's undue influence, we can show there's undue influence. Consilience Project, we wrote some articles on this. There's one on undue influence that makes this argument more deeply. And
[174:42] Because you can show it's gathering privileged information, it should be in fiduciary relationship where it has to pay attention to my goals and optimize aligned with my goals rather than I'm the product and it's optimizing with the goals of the corporation or its customers, right? In order to do that, that would change its business model. It couldn't have an ad model anymore. I would either have to pay a monthly fee for it or the state or some commons would have to pay for it and everybody had access to it or some other things.
[175:11] That seems like a very good step in the right direction. And that is an alignment thing, right? The principal-agent issue is a way of trying to solve the alignment, which is to say that this more powerful AI system here, the curatorial AI, social media, would be aligned with my interest and bound in some way. And maybe we would extend that to all types of AI. Well, of course, in the AGI case where it becomes fully autonomous
[175:41] and becomes more powerful than any other systems, what other system can check it to see if what it is doing is actually aligned or not? There isn't a group of lawyers that can check that lawyer, right? So that becomes a big issue. And if it really becomes autonomous, as opposed to empowering a corporation, which is run by humans, it's different. And
[176:05] So this is one part on the topic of alignment and alignment with our intention or well-being. You can do superficial alignment with our intention, which the social media thing already does, but it's not aligned with our actual well-being because an asymmetric agent is capable of exploiting your sense of intentionality. And similarly,
[176:29] When you say there's a common sense that says, don't break the door down when you're bringing me coffee, there should be a common sense that says, don't overfish the entire ocean and don't cut all the damn forests down in the process of growing GDP, and there is clearly not. Right?
[176:47] And so we can see that the current human world system, without AI, already actually doesn't have that kind of check and balance in it in all the areas that it should, so long as the harms are externalized somewhere far enough away that we don't instantly notice them and change them. So the question is: if we have a radically more powerful optimizer than we already have, what do we align its goal with?
[177:15] If we just say align it with our intention, but it can change our intention, because it can behavior-mod me and we can't possibly deal with that because of the asymmetry, that's no good, as in the Facebook case. If we try to align it with the interest of a nation-state, that can drive arms races and war with other nation-states. If we align it with the current economy, that's misaligned with the biosphere. That's not good. So the topic of alignment is actually an incredibly deep topic.
[177:44] And this now gets to what you've probably addressed on your show in other places. It gets to a very philosophic issue, which is the kind of is-ought issue: science can say what is, it can't say what ought, right? That kind of distinction by Hume and others historically. And the applied side of science, as technology and engineering, can change what is, but what ought to be, what is the ethics that is somehow compatible with
[178:14] science, is a challenge. The best answer we have had, arguably, came from the mind that created both a lot of our nuclear technology and our foundations of AI, von Neumann: game theory, right? The idea that what is good is the thing that doesn't lose. And we can arguably say that that thing, instantiated by markets
[178:41] and national security protocols, has actually been the dominant definition of ought that ends up driving the power of technology. Because science says: we can't say what ought, we can only say what is, but we're really fucking powerful at saying what is, in a way that reduces to technology that changes what is, where we can optimize some metrics and say it's good, even if we externalize a lot of harm to other metrics, or optimize for in-groups at the expense of out-groups, or whatever it is, right?
[179:09] But we say that not only do we not have an ought, but that any system of ought is not the philosophy of science, and so ought is, insofar as that's concerned, out of scope or gibberish. Well, then what ends up guiding the power of technology? Markets do, and to some extent national security does. In other words, rivalrous, game-theoretic kinds of interests. And so what gets researched? The thing that has the most market potential.
[179:38] And so then again, what is actually developing the technology? Because, as Einstein said, I was developing science not knowing it was going to do that application, didn't want that application, wanted science for social responsibility. But for the most part, the research that gets funded is R&D towards something
[180:02] that ends up either advancing the interest of a nation state or the interest of a corporation or whatever the game theoretic metric set of the group of people that is doing the thing.
[180:17] And so what I would say is that as we get to a more and more powerful is, more and more powerful science that creates more and more powerful tech, exponentially powerful tech, especially as we're already hitting the fragility of the planetary systems (and when we say more powerful, we mean exponentially more powerful, not iteratively more powerful), you have to have a system of ought powerful enough to guide, bind and direct it. Because if you don't, it is powerful enough, in whatever it is optimizing for, to
[180:48] destroy enough that what it optimizes for doesn't matter anymore. Now, this is a fundamentally deep metaphysical philosophical issue. And of course, when we talk about regulation, law, the basis of law is jurisprudence, right? And is ought questions, right? Applied ethics that get institutionalized for exactly this reason.
[181:16] And so when we say, if we have tech that is powerful enough to do pretty much fucking anything, what should be guiding it and what should be binding it? And if we don't answer those well, what is the default that will be guiding and binding it currently? And what does that world look like? So this is a super cheerful conversation. What is the call to action?
[181:44] Okay, we're quite a ways away from that. Let me try to expedite a couple of other parts. When we were mentioning AI risk, we said AI empowering bad actors. So you can think about whether a bad actor is a domestic terrorist, for whom the best thing they can do right now is get an AR-15 and shoot up a transformer station to take down the power lines. An AR-15 is a kind of tech that has some capability; someone
[182:13] getting a sense of being disenfranchised by the current system and being motivated to utilize what is within their resources to do something about it. The barrier to entry of the more powerful tech is getting lowered. You can put those things together.
[182:37] But whether you're talking about domestic terrorism like that, or international terrorism from larger groups, or full military applications: let's just go ahead and say, can we make deepfakes that make the worst kinds of
[183:01] confusion, conspiracy theory, in-group/out-group thinking, propaganda? Of course. That is an emerging technology that's imminent. Can we use people's voices and what looks like their video and text for ransom and fucked-up stuff? You can think of all the bad actor applications and then you can pretty much apply it to... This is a piece of theory I wanted to say. Every technology
[183:30] has certain affordances. When you build it, it can do things where, without that technology, you couldn't do those things, right? A tractor allows me to do things that I couldn't do without a tractor, just with a shovel, in terms of volume and types of work and various things. Every technology is also combinatorial with other tech, because what a hammer can do if I
[183:57] And obviously it requires the blacksmithing to make the hammer, right? So you don't just have individual tech, you have tech ecosystems. And the combinatorial potential of these pieces of tech together has different affordances, right? But then, what we use it for is based on the motivational landscape.
[184:18] I can use something like a hammer to be Jimmy Carter and build houses for the homeless with Habitat for Humanity, or I can use it as a weapon and kill people with it. And so the tech has the affordances to do both of those. So the tech will be developed and utilized based on motivational landscapes. Sure. And just briefly, and going back to earlier, it's not just dual, because that would be double-edged, it's multipolar. Omni, yes. Okay.
[184:44] So what we can say is the tech will end up potentially getting utilized by all agents for all motives that that tech offers affordances relevant to their motives. And so when we're building a piece of tech, we don't want to think about what is our motive to use it. We want to think about
[185:06] Are we making a new capacity that didn't exist before, lowering the barrier of entry to a particular kind of capacity, where now what are all the motives that all agents have who have access to that technology and what is the world that results from everybody utilizing it that way? That's factoring second, third, fourth order thinking into the development of something, a new capacity that changes the landscape of the world.
[185:31] I would say that every scientist who is working on synthetic bio for curing cancer, or AI for solving some awesome problem, every scientist and engineer and so on, has an ethical responsibility to think about the new capability they're bringing into the world that didn't exist: not just how they want to use it, but the totality of use that will happen by them having brought it into the world, that wouldn't have happened had they not.
[186:01] When I say there's an ethical responsibility: there is currently no legal responsibility, no fiduciary responsibility, where you are liable for the harms that get produced by a thing you bring about that someone else reverse engineers and uses a different way. But there is financial incentive, and there are Nobel prizes, for developing the thing for your purpose, and then, again, socializing the losses of whatever anybody else does with it. So this is one of those cases where the personal, near-term, narrow motive,
[186:31] this being us as fucking narrow AIs in an ethical sense, is to do the thing even if the net result of the thing ends up being catastrophically harmful, right? So the incentives and deterrents, the motivational landscape, are messed up. Now I want to make a couple more philosophy-of-tech arguments. Tech is not values neutral. The values-neutral idea is that the hammer is not good or bad, it's just a hammer, and whether you use it to build a house for the homeless or
[187:00] beat someone's head in is up to you; the motivational landscape and the tech have nothing to do with each other. This is not true. If a technology gives the capacity to do something that provides advantage, relative advantage in a competitive environment, whether it's one nation state competing with another nation state, or one corporation, or one tribe with another, and if it provides significant competitive advantage when you use it a particular way,
[187:30] then anyone using it that way creates a multipolar trap that obligates the others to use it that way, or a related way. And so we end up getting a couple of things, right? The classic example I've used a lot, because there's been so much analysis on it, is the plow. If you think about the plow as a technology,
[187:56] it was one of the key technologies that moved us from sub-Dunbar-number, hunter-gatherer, maybe horticultural subsistence cultures to large agricultural civilizations. The plow is not a neutral technology where you can simply choose to use it or not use it. The populations that used it
[188:17] made it through famines and grew their populations way faster than the ones who didn't, and they used their much, much larger populations to win at wars, right? So the meme set that goes along with using it makes it through evolution; the meme set that doesn't, doesn't make it through evolution. And correspondingly, there are things required in order to implement the tech.
[188:38] It has ethical consequences. I had to clear cut in order to do the kind of row cropping that the plow really makes possible. I have to get an open piece of land that is now being used for just human food production and not any other purpose. So I'm going to clear cut the forest or a meadow or something to be able to do that.
[189:02] I'm already starting the Anthropocene with that, right? Changing natural environments from whatever habitat and home they were for all the life that was there to now serving the purpose of optimizing human productivity. And I have to yoke an ox and I probably have to castrate it and do a bunch of other things to be able to make that work and probably beat the ox all day to keep pulling the plow. In order to do that, I have to move from the animism of
[189:33] I respect the great spirit of the buffalo, and we kill this one with reverence, knowing that as it nourishes our bodies, our bodies will be put in the dirt and make grass that its descendants will eat, part of the great circle of life, whatever kind of idea like that, to: it's just a dumb animal, it's put here for human purposes, man's dominion over it, it doesn't have feelings like us, that kind of thing,
[189:53] which then spills over to, it's just, it's not like us. So we remove our empathy from it and we apply that to other races, other classes, other species, other whatever, right? So something like the plow is not values neutral. To be able to utilize it, I have to rationalize its use, realizing it creates certain externalities. And if I see those externalities, I have to have a value system that goes along with that.
[190:19] Wait, sorry, to be particular with the word choice, it's not that the plow is not value neutral, it's the use of the plow. Exactly, exactly. And that the plow doesn't use itself, right? And so the use of the plow is not value neutral. Now, a life where I am hunting and gathering versus a life where I'm plowing are also totally different lives. So it codes a completely different behavior set.
[190:44] In doing that, it makes completely new mythos, which is why the hunter-gatherer mythos and the agrarian mythos are completely different. They have different views towards men and women and towards sky gods versus animism and towards all kinds of things.
[190:58] But the other thing is that it provides so much game-theoretic advantage to those who use it, relative to those who don't, when they hit competitive situations. That's why there are not many hunter-gatherers left and why whole societies went agricultural. So it's not just the tech: the tech requires people using it, which changes the patterns of human behavior. Changing the patterns of human behavior automatically changes the patterns of perception and human psyche, metaphors, cultures, et cetera,
[191:28] and the externalities that it creates and the benefits that it confers become implicit to the value system, because the value system can't be totally incommensurate with the power system, right? And so the dominant narrative ends up becoming support for, one could argue apologism for, the dominant power system. Because we can't feel totally bad about how we meet our needs, we have to have a value system that deals with the problems of how we do so.
[191:56] But it's also that the tech that does this becomes obligate, because when anyone is using it, everyone else has to, or they kind of lose by default. So recognize that the technology, when utilized, affects patterns of human behavior: humans now do the thing they do with the tech. People do this thing on the cell phone or whatever, and they didn't used to do this thing.
[192:22] To get the benefits of the tech, you have a totally different pattern of human behavior. As a result, you have different nature of mind. You have different value systems that emerge and it becomes obligate or some version of a compensatory tech becomes obligate because the Amish are not really shaping the world. They're no longer engaged in the great power competition.
[192:41] I have a bone to pick there. I watched a video a few months ago, and I didn't know anything about the Amish; I'm just someone who grew up in the city, so I had dismissed them as Luddites, like we've used that term several times: they're backward, they don't know what they're talking about. And then I watched this video: the Amish aren't idiots, they're not asinine. There's a reason why they do what they do, and they either explicitly or intuitively understand that the technology changes the social dynamics and the way that they view the world and
[193:09] Totally, and has ethical considerations. But that influenced me; that influenced perhaps millions of people, because it's a video, I think it has a few million hits. Even if they stay local, just them saying, you know what, I don't care, I'm going to continue to act right, can still influence outward anyway, and we're talking about it now. Maybe this will influence people, hopefully toward something positive, and hopefully myself toward something positive. Yeah. Okay, so it's not that...
[193:37] You come to this a few times, which is even if you have a memeplex that doesn't become part of the dominant system, can it infect or influence the memeplex in a way that steers it? Yes. But one does not want to be naive about how much influence that's going to have. They want to be thoughtful about exactly how that'll work and what kinds of influences. As we mentioned, not all of Buddhism got picked up everywhere, right? Like the parts that had to do with
[194:04] why people should take vows of poverty and live on very little, that didn't really get picked up. The parts on how to reduce stress got picked up. The parts on what a healthy motive is didn't get picked up as much as the parts on how to empower your motive through a more functional mind. So it's important to get that the memes live in a complex, in a context; when they have influence, parts of them are going to interact with another memeplex and the technoplex and everything else. And so
[194:34] you are right to say that it's not that they have no influence. But obviously the Amish, not speaking to whether they're dumb and backwards, but to their "we don't want to engage tech for these reasons" argument, don't have a significant say in whether we engage in a particular nuclear war or not. They were not the ones that overfished the ocean or caused species extinctions, but they also couldn't stop it.
[195:00] They are not the ones that are creating synthetic biology that can make totally new species. And this is why I say there is a naive techno-optimism that focuses on the upsides and doesn't focus on all the nth order effects and downsides.
[195:14] And as we were just mentioning, the externalities of tech are not just physical, right? You do this mining to get the thing you want, but there's a lot of mining pollution or the herbicide does make farming easier in this way, but it harms the environment and human health in this other way. That's physical externalities, but you also get these psychosocial externalities.
[195:31] Use Facebook for this purpose and it ends up eroding democracy and doubling down on bias and increasing addiction and body dysmorphia and things like that, right? So the tech has effects that are not the ones you intended. Some of them might be positive, you can have a positive externality, but it might have a lot of negative externalities, and those
[195:49] Negative externalities can cascade second, third, fourth order effects. So there's a naive techno-optimism that doesn't pay enough attention to that. There's a naive techno-pessimism that says, well, I'm aware of those negative externalities. I don't want those for our people. We think we can isolate our people from everybody else and say, we're going to not do that. But we're going to have decreased influence over what everyone who is doing that has.
[196:15] Right, which is what some of the AI labs then argue: there's an arms race, we can't stop the arms race, and so only by being at the front of the arms race can we steer it. I would argue that that is a naive version of that particular thing, but nonetheless. So, you know, one way of reading one of the problematic lessons of the elves in Tolkien,
[196:43] and I'm just offering this as a toy model, is that in some ways they figured out how to have a nicer life than the humans and dwarves and whatever else. They were able to do radical life extension and figure out great GDP per capita, where even the poorest were doing well.
[197:01] But they became insular, because they were so disenchanted with the world of men and dwarves and whatever that they were like, fuck it, we're just going to isolate and do our own thing our own way. But it ends up being that you're still all sharing Middle-earth,
[197:15] and a problem somewhere else can cascade into catastrophic problems that end up messing up your world too. So you can't be isolationist forever in an interconnected world. So they actually, if we rewrote the story, were kind of obligated to take whatever they had learned and try to help everybody else have it, or have enough of it, so that you didn't get orc dominance and stuff like that. So basically I'm arguing that an isolationist "we see a problem, we're going to avoid it for ourselves"
[197:45] doesn't work with planetary problems. And so I'm not interested in the naive techno-pessimist versions that say, because we see a problem with tech, we're not going to do it, while also kind of giving up our seat in the process and not engaging with the fact that we actually care about what happens for the world overall, and we have to engage with how the world as a whole is doing that thing. Does that make sense, what I mean by naive techno-pessimism? And it is that
[198:15] You do not get to do effective isolationism on an interconnected planet that is hitting planetary boundaries with exponential tech.
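As an aside: the multipolar-trap logic running through this stretch has the structure of a prisoner's dilemma, and a toy payoff table makes it easy to see. The adopt/abstain framing and the payoff numbers below are invented purely for illustration; they are not from the conversation or from any real analysis.

# Toy sketch of a multipolar trap as a two-player game.
# All payoff numbers are invented for illustration.

payoffs = {
    # (my_choice, their_choice): (my_payoff, their_payoff)
    ("abstain", "abstain"): (3, 3),  # mutual restraint of the risky tech
    ("adopt",   "abstain"): (5, 0),  # I adopt while you restrain: decisive advantage
    ("abstain", "adopt"):   (0, 5),  # the reverse: I "lose by default"
    ("adopt",   "adopt"):   (1, 1),  # everyone adopts: arms race, worse for all
}

def best_response(their_choice: str) -> str:
    """My payoff-maximizing move, given the other actor's move."""
    return max(("abstain", "adopt"),
               key=lambda mine: payoffs[(mine, their_choice)][0])

# Whatever the other actor does, adopting is individually rational...
assert best_response("abstain") == "adopt"
assert best_response("adopt") == "adopt"

# ...yet mutual adoption leaves both sides worse off than mutual restraint.
print(payoffs[("adopt", "adopt")], "versus", payoffs[("abstain", "abstain")])

Whatever the other side chooses, adopting is the individually rational move, which is why unilateral restraint reads as losing by default even though universal adoption is worse for everyone than universal restraint.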
[198:23] Yeah, I guess what I'm trying to express is that even with the Amish, what they're doing, it's not as simple as "the memeplex exported by the Amish is the Amish memeplex." There's something else that influenced even them, and even yourself. You may be in a position that saves Earth, at least for now, from a hugely catastrophic event. Same with Yudkowsky and some others. But what influenced you? There's some good in you, hopefully, that was influenced by something else, something else that's good, which also influenced the Amish. Each person is corrupt in some way.
[198:52] So I'm saying that there's something like the unity of virtues that influences us, and as long as we go back and think, or constantly assess ourselves, and ask, is what I'm doing good, then these other memeplexes that are being thrown at us, yes, in a different context, can somehow orient us and pick up the good. We're completely on the same page: that that happens, that it does not always happen, and that it is an important thing to have happen.
[199:22] But if that happened adequately, yeah, I will say adequately, then we wouldn't have driven extinct all the species that we have. We would not have turned as many old-growth forests into deserts. We would not have had as many genocides and unnecessary wars, et cetera. So look at the failure, where either someone's definition of good is too narrow:
[199:49] get our God to win and fuck everybody else or grow GDP and that'll take care of everything. We can well-intendedly pursue a definition of good that's too narrow and externalize harm unintentionally. We can pursue a definition of good that we really believe in, that other people don't believe in, and our answer is to win over them, but it creates an arms race where now we're in competition over the thing. Or where there are people who are really not pursuing the good of all, even
[200:14] They're not even trying to, right? Whether it's sociopathy from a head injury or genes or trauma or whatever it is, they are pursuing a different thing, right? But they're good at acquiring power. And this is actually a very important thing: the psychologies that want power and are good at getting it
[200:41] and the psychologies that would be the best stewards of power for the well-being of all are not the same set of psychological attributes. They're pretty close to inversely correlated. So those types of things have to be calculated into this. So what you're saying right now, and it's great that you're saying it, is that there is some ought impulse that is not only an is impulse, right? Something you're calling a universal virtue or good or something.
[201:09] and that you're saying that some people feel very called by that and that that's important. I agree completely. Now, why is that historically and currently not a strong enough binding is the important question. Why has that not been a strong enough binding for all the species that are extinct and all the animals and factory farms and all the disruption, et cetera?
[201:39] And then, what would it take for it to become a strong enough binding? That's the nature of the question here, right? That's actually at the heart of the metacrisis question: what is a system of ought that is actually commensurable with the system of is, and what is a way of having that actually sufficiently influence behavior such that the catastrophic behaviors don't occur,
[202:08] and that the nature of the influence is not so top-down that it becomes dystopic? So one way of thinking about this: I've mentioned the terms a couple of times, superstructure, social structure, infrastructure. That comes from Marvin Harris's work on cultural materialism, basically saying that for every civilization
[202:32] you can think of it in those terms: what is its memeplex, what are its social coordination strategies, and what is the physical tooling set upon which it depends? And different social theorists will say which of these they think is most fundamental. Some say the value systems are ultimately what steer behavior and determine the types of tech we build and the types of societies: religious thinkers tend to think that, right? Enlightenment thinkers think that. Others say the social system: whatever you incentivize financially is what's going to win,
[203:01] because whether it's good or not, the people who do that get the money, can incentivize more people, create the law, et cetera; so ultimately the most powerful thing is the social coordination systems. And then the other school of thought says, no, actually, the thing that changes the most over time is the tech, and the tech influences the patterns of human behavior, values, everything else. And that's roughly what Marvin Harris was saying:
[203:28] the change in the techplex ends up being the most influential thing, because it does affect worldviews and it does affect social systems. In the way we already mentioned, the change of the techplex of the printing press affected both worldviews and social systems; so did the plow, so did the internet, and so is AI about to. I would argue that these three are interacting with each other in complex ways. They all inter-inform each other, and what we have to think about is:
[203:59] what changes in all three of them simultaneously, factoring all the feedback loops, produce a virtuous cycle that orients in a direction that isn't catastrophes or dystopias? That's the right way of thinking about it. And ultimately, the direction actually has to be the superstructure informing the social structure, which informs, guides, binds, and directs the infrastructure. Sorry, can you repeat that once more?
[204:29] Yeah. Right now, and especially post-Industrial Revolution, physical technology, the infrastructure, has had way faster feedback loops on it than the others, right? And because of that, it started breaking the previous worldviews and social systems: industrial-era tech
[204:50] at massive scales, with externalities that can't be managed by agrarian-era or hunter-gatherer-era worldviews and political systems, right? So we ended up getting a whole new set of political systems, nation-state liberal democracies and capitalism, and socialist communism, emerging riding the Industrial Revolution, asking what should be the political economy that governs that thing, right?
[205:19] But the feedback loops from tech, whether it's a nation state caught in multipolar traps building the tech in a central-government communist type of place, or a market building it but with perverse incentives built into its incentive structure, have influence on our social structures and our cultures, our superstructures. That, we could say, is one
[205:47] way of thinking about the driver of the Meta Crisis. Now, the other direction: if we are to say, no, these examples of the tech won't be built, or we're not going to use the tech in these ways, right? Yes, you can use a tech that extracts some parts of rocks from other parts of rocks and gives us the metal we want but also a lot of waste; no, you can't put all that waste in the waterway, right?
[206:13] Or you can't put that pollution there, or you can't cut all the trees down in that area because we're calling it a national park, right? That law or regulation is not just a tech thing; that's the social layer. So that layer has to bind the tech and guide and direct it: these applications and not those ones. Yeah. Right? Yeah. But if the social system, if the social structure, is not an instantiation of the superstructure, i.e. it's not an instantiation of the will of the people, then it will be oppressive,
[206:43] which is why the idea of democracy emerged out of the idea of the Enlightenment. This was a kind of governance that only worked for a comprehensively educated populace, and if you read all the founding documents, that comprehensive education had to be is and ought. It said you must have a moral education as well as a scientific education, and you must be schooled in the science of governance,
[207:11] and only a people like that, going back to what we said earlier, could check the government, could know the jurisprudence to instantiate what is good law, could engage in dialectics, listening to other people's points of view, to come up with democratic answers. So it was the idea that there was a kind of superstructure possibility, some kind of Enlightenment-era values, that could make a type of social structure
[207:39] that could both utilize the tech and guide it, but also bind its destructive applications. So when you're saying, isn't there some superstructure, isn't there some sense of good that will make us bind our capacities? I would argue that if the sense of good doesn't emerge from the collective understanding and will of the people, but is instantiated in government because we, the technocrats know the right answer or we, the enlightened know the right answer, that will be oppressive and people are right to be concerned by it.
[208:09] But if the collective understanding is, I want what I want, I don't care what the effects are, fuck those guys over there, or I'm not paying attention to externalities or whatever, then the collective will of the people is too dumb and misguided to govern exponential tech, and it will self-terminate. So you cannot have
[208:31] an uneducated, unevolved set of values, in a libertarian way, guide exponential tech well. It has to be more considerate. It has to think through nth-order effects. But you also can't have a government that says, we are the enlightened ones, we figured it out, and we're going to impose it on everyone else, without it being oppressive and tyrannical. Which means nothing less than a kind of cultural enlightenment is required long-term. So the collective will of the people, and now this gets back to the alignment topic:
[209:01] is the will of the people aligned with itself, actually, right? Is what I want, factoring the effects of what I want, the nth-order effects, which means how other people will respond to it and all that comes from it, actually even aligned with a viable future, right?
[209:21] And so, when we're talking about getting alignment right: alignment with my intention, where my intention is that my country wins at all costs, and then China's like, well, fuck, we're going to do the same thing, or Russia, et cetera, so you get arms races; or my intent is I want more stuff and to keep up with the Joneses and I'm not paying attention to planetary boundaries. Those intents are not aligned with their own fulfillment, because the world self-terminates if that process runs for too long.
[209:52] And so with the power of exponential tech and the cumulative effects of industrial tech, we do have to actually get ought that can bind the power of that is, and it can't be imposed. It does have to be emergent, which does mean something like that sense that you're saying has to become
[210:14] very universal and nurtured, right? And then it has to also instantiate itself in a reformation of systems of governance. I love what you said. Let's see if I can recapitulate it. There's tech at the bottom, there's the social structure here, and then there's culture. There are people in all three. People's values are up here, the social structure is over here, and the tech is over here.
[210:39] Okay. So currently the tech influences the way our society is structured, which also influences our values, and part of the meta crisis is saying that that's upside down. But it shouldn't just be whatever values get imposed onto the social structure; the values have to somehow come from someplace else, and the mystics have their own take: they have to be coherent with reality.
[210:59] Spiritual people may call this something akin to God, and the Enlightenment people may say, I don't know, maybe there's some evolutionary will that comes out, and if we just close our eyes and hope for the best, somehow it emerges. Whatever it's called, it's not entirely us, it's not entirely our conscious selves. The conscious self would be the more humanistic, Enlightenment way of thinking about it, that we can impose our own values; Nietzsche had something similar to this. So I like this, I'm in agreement with it. I think we've just been using different terminology.
[211:29] You and I both know that an evolution of cultural worldview and values adequate to steward the power of exponential technology in non-catastrophic or non-dystopic ways is happening in some areas, but a regress is also happening in some areas.
[211:59] There is increasing left-right polarization. I thought you were going to say there's a regress happening in demand. For instance, we generally think it has to be Malthusian: the more that we use, the more the demand for it increases. That's obviously excluding some of the scarcer objects like art and gold, whose value comes from scarcity.
[212:22] But the largest health trend right now is fasting. It's like we've gotten so much food that we're like, let's just not have any food. And then there's also recycling; just imagine that we think about recycling at all. So there is some recognition that, hey, look, we're consuming too much, let's cut back. And so it's not purely just an exponential function; we do take into account the rate of production. Well, what we can see is that
[212:48] the percentage of the total, let's go ahead and say the US, but we could look at the UAE or Nigeria or various other places, the percent of the US population that is regularly doing fasting is still a relatively small percentage.
[213:09] And in the same way, it is true that there was a market race to the bottom that we saw in food, which Hostess and McDonald's, et cetera, kind of won: how do we make the food more and more a combination of salt, fat, sugar, texture, and palatability that maximizes stickiness and addiction? Which, of course, if I have a fiduciary responsibility to shareholder maximization and the key to that is optimizing the lifetime value of a customer, addiction is awesome, right?
[213:39] It's actually not awesome; it's legally obligated because of maximizing shareholder returns. So that created a race to the bottom where, rather than starvation being the leading cause of death, obesity became the leading cause of health-related death in the West. Well, the bottom of that race to the bottom also creates a race to the top for a different niche. So then Whole Foods becomes the fastest-growing supermarket, and biohacking, et cetera.
[214:06] So that's true. Did that affect the overall population demographics regarding obesity very significantly? Not really. Did it stop the race to the bottom? No, it just added another little niche race, which also then separated, which created more class system separation. So it's not that those effects don't happen.
[214:27] Are they happening at the scale and speed necessary when we look at catastrophic risks? So, of course, I can also pay more for a post-consumer recycled thing and there is more recycling happening. But there's also more net extraction of raw resources and more net waste and pollution per year than there was the previous year, because the whole system is growing exponentially. So even if the recycling is growing, it's actually not growing enough to even keep up with demand, right?
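To put toy numbers on that last point: the figures below (3 percent annual growth in total demand, a recycled share rising one percentage point per year) are hypothetical, chosen only to show how a growing recycling share can coexist with growing virgin extraction.

# Hypothetical illustration: total material demand grows 3% per year while the
# recycled share rises from 10% to 30% over 20 years. Virgin extraction still
# ends higher than it started. All numbers are invented.

total_year0 = 100.0   # total material demand in year 0 (arbitrary units)
growth = 1.03         # assumed 3% annual growth in total demand

for year in (0, 10, 20):
    total = total_year0 * growth ** year
    recycled_share = 0.10 + 0.01 * year     # assumed: +1 percentage point per year
    virgin = total * (1 - recycled_share)   # demand met by newly extracted material
    print(f"year {year:2d}: total {total:6.1f}, recycled {recycled_share:.0%}, virgin {virgin:6.1f}")

# year  0: total  100.0, recycled 10%, virgin   90.0
# year 10: total  134.4, recycled 20%, virgin  107.5
# year 20: total  180.6, recycled 30%, virgin  126.4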
[214:54] So what I'm saying is now let's come bring that back to the memetic space, which is where I was. There are both evolution of values where people are wanting to think through catastrophic risk, existential risk, planetary well-being of everybody long term. So that's good. But there is also
[215:14] Cultural enlightenment is not impossible, but it's also not a given.
[215:37] The kind of mimetic shift, and this is obviously, I think, a big part of why you do the public education mimetic work that you do, is because of having a sensibility about, is it possible to support the development and evolution of worldviews and people in ways that can propagate and create good?
[216:06] Well, you're saying it much more benevolently. Honestly, it's just selfish: I'm super, super, super curious about all of these topics, and by luck some other people care to listen and follow along, and I just get to elucidate things for myself. It's so fun; it bangs on every cylinder. And some other people seem to like it. I hope that what I'm doing is something positive. I hope that it's not producing more harm; I'm not even considering that this is sent over the internet, using up energy.
[216:36] Okay, what you just said takes us somewhere that I wanted to go that's very interesting. So we're talking about alignment, and whether a particular alignment, a particular, say, human intention, is aligned with the collective well-being of everybody, or even with their own long-term future. At the base of the alignment problems is that we are not aligned with the other parts of our own self, right? So from a kind of Jungian parts-conflict point of view,
[217:06] Motivation is complex because there's different parts of us that have different motivations. One way of thinking about psychological health, the parts integration view, is the degree to which those different parts are in good communication with each other and see synergistic strategies to meet their needs that don't require that part of self's motivation harming another part of self, but they're actually doing synergistic stuff. The whole of self pulls in the same direction.
[217:31] If you think of the parts of self as sled dogs, they can be pulling in opposite directions: you get nowhere, and they all choke themselves. So we can see psychological health and ill health as how conflicted the parts of ourselves are versus how synergistic they are. Synergistic does not mean homogenous; it doesn't mean we just have one motive. It means that the various motives find synergistic alignment rather than conflict. Yeah, like our bodies are synergistic. Our heart is not the same as the liver. Exactly. Now, your heart
[217:57] is not going to optimize itself in a way that delivers long-term harm to the liver. Even though on its own you could say it has a different incentive, it is part of an interconnected system where that doesn't really make sense. But a cancer cell will optimize itself, how much sugar it consumes and its reproduction cycles, at the expense of the things around it. And in doing so, it is actually on a self-terminating curve, because it ends up killing the host and then killing itself.
[218:25] And so the cancer that does not want to bind its consumption and regulation aligned with the pattern of the whole ends up actually doing better in the short term, meaning consuming more and producing more. And then there's a maximum number of cancer cells right before the body dies and they're all dead. So there is a, if something is inextricably interconnected with the rest of reality, like the heart and the liver or the various cells.
[218:51] But if it forgets that, or doesn't understand that, and optimizes itself at the expense of the other things, it can be on what looks like a short-term winning path, but one that self-terminates. It would be an evolutionary cul-de-sac. And I would argue that the collective action failures of humanity as a whole are pursuing an evolutionary cul-de-sac. And so, one way of thinking about this, when we say we aren't even that aligned with ourselves: think of motive.
[219:18] We like to think of leaders: what is Putin doing, or what is Biden or Xi doing in a particular situation, what is their motive? Or what is the motive of the founder of an AI lab? The reality will always be that each of the parts of the self has a different motive, right? So typically, some unconscious part of me still wants to
[219:41] get the amount of parental approval that I didn't get, and projects that onto the world through some idea of success, or proving that I'm enough, or whatever. Some part of me is just directly motivated by money. Some evolutionary part is motivated by maximizing mate-selection opportunities. Some part of me genuinely wants to do good. Some part of me wants intellectual congruence, right? And so
[220:10] there can absolutely be a burn-it-all-down part, right? And this is why shadow work is important, right? Which is: look at and talk to all of these parts and see how to get them to move forward together. This is basically governance at the level of psychological health. So, I don't know if you ever watched... and this might be, because we're running long even though there's so much left to discuss, a decent place for us to wrap up on alignment.
[220:40] I would say a number of the theorists who you have referenced on the show would be good references for what I would consider the deepest drivers of the Meta Crisis, and also the alignment considerations. Think of Iain McGilchrist's work on the master and the emissary: the right hemisphere is the master and the left hemisphere is the master's emissary.
[221:04] In his 2009 opus, The Master and His Emissary, Iain McGilchrist discusses the distinct functions of the brain's left and right hemispheres. Generally, there's plenty of pop-science woo around this concept, but you can dispel that by going further and finding what's correct about it. The left hemisphere focuses on processes such as formal logic, symbol manipulation, and linear analysis, while the right hemisphere is concerned with context awareness,
[221:32] the appreciation of unique instances, and topological understanding. Hey, maybe there's some Stone duality between them, but I haven't thought much about this.
[221:45] John Vervaeke's work, by the way, explores the primacy of cognitive processes like relevance realization, aiming to bridge the gap between analytic and intuitive thinking. Both McGilchrist and Vervaeke emphasize the importance of integrating the strengths of each hemisphere, or mode of cognition, when attempting to tackle intricate problems such as the risks of ever more powerful AIs.
[222:09] The argument is that AI systems primarily operate using left-hemisphere-like capabilities: pattern recognition, logical reasoning, and general optimization. However, they fail to adequately consider the subtleties of human values and ethical implications, which can lead to harmful outcomes.
[222:26] To mitigate AI risks and prevent an arms race, incorporating insights from both hemispheres and embracing context awareness is crucial. This requires interdisciplinary collaboration between mathematicians, computer scientists, physicists, philosophers, and neuroscientists.
[222:43] And by the way, it's something that we're attempting in our humble manner on the Theories of Everything channel. By exploring concepts in complex systems theory and how they apply to our current unprecedented situation, we at least hope to understand the interconnectedness of the factors at play in AI development. For instance, addressing AI risks can involve analyzing multi-agent systems, considering network effects and potential feedback loops.
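To make that last point a little more concrete, here is a minimal sketch of one such feedback loop: two actors who each set their capability investment in response to the other's last move. The escalation factor and starting values are invented for illustration; this is a cartoon of an arms-race dynamic, not a model of any real lab or nation.

# Toy feedback loop between two competing actors. All numbers are hypothetical.

def step(mine: float, theirs: float, escalation: float = 1.2) -> float:
    """If the other actor is ahead, match them and add a margin; otherwise hold."""
    return max(mine, theirs * escalation)

a, b = 1.0, 1.0  # starting capability levels (arbitrary units)
for round_number in range(6):
    a, b = step(a, b), step(b, a)  # each responds to the other's previous level
    print(f"round {round_number}: a={a:.2f}, b={b:.2f}")

# Neither actor intends runaway escalation, but the loop between their
# individually "reasonable" responses produces exponential growth.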
[223:08] We do not think ourselves into a new way of living. We live ourselves into a new way of thinking.
[223:28] You could say, and I talked to Iain about this, I said: so would you say that the metacrisis, as I formulated it, is the result of getting the master and the emissary wrong, which is kind of getting the principal and agent relationship between those two different aspects of the human wrong? And he goes: yes, exactly, that's kind of the whole key. That there is a function, which he's calling the master, that perceives the kind of unmediated field of interconnected wholeness, multimodally perceives and experiences it.
[223:57] And then there is this other set of networks, capacities, or dispositions that perceives parts relative to parts, names them, does symbol grounding, orients more in the domain of symbols, and can do relevance realization (what part is relevant to a particular goal I have), salience realization (what things should be relevant to some goal and I should be tracking), and information compression, which are largely the things we think of as human intelligence. Which of course AI is taking, that
[224:26] emissary part, and turning it into an external tool, rather than its being the thing that makes tools in us; now we take that thing and make it a tool, but also unbound from the master function, the way he would put it. It's a very interesting way of thinking about AI alignment. And the master function that is perceiving the
[224:50] unmediated ground directly, not mediated by symbol, the field of interconnected wholeness: the other function, which can do relevance realization, parts realization, salience realization, info compression, basically utility-function stuff, has to be in service of that field of interconnected wholeness. If not, we'll upregulate some parts at the expense of other ones, and the cumulative effect of that on an exponential curve will eventually bring us to the meta crisis and self-terminate.
[225:17] I would say what McGilchrist was saying was expanding on what Bohm said about the implicate order and wholeness.
[225:24] Bohm's theory of wholeness and the implicate order states that there is something like life and mind enfolded in everything. There are a tremendous number of ways in which one can see enfoldment in the mind. One can see that feelings enfold thoughts, because a feeling will give rise to a thought, and thoughts enfold feelings: the thought that the snake is dangerous will enfold the feeling of danger,
[225:47] which will then unfold when you see a snake, right? Bohm was looking at how the orientation of a mind that mostly thinks in words, a Western mind in particular, to break reality into parts, to make sure that our word, the symbol corresponding with the ground there, corresponded with the things it was supposed to and not the other things, to try to draw rigorous boundaries and divide everything up, led to where we are.
[226:15] And when Bohm and Krishnamurti did their dialogues, which I don't know if you've watched those, they're some of my favorite dialogues in history,
[226:46] Bohm was basically answering: what is the cause of all the problems, what's the cause of the Meta Crisis? He didn't call it that at the time. And he basically said, a kind of fragmented or fractured consciousness that sees everything as parts, where you can upregulate some parts relative to other ones without thinking about the effect of that on the whole, right?
[227:05] And that obviously comes from Einstein being one of his teachers, where Einstein said it's an optical delusion of consciousness to believe there are separate things; there is in reality one thing we call universe. Regarding the theme of consciousness, it's prudent to give an explication here, as often, at least I've found, mysteries arise because we're calling different phenomena by the same term. And this applies to consciousness, which doesn't refer to just one aspect but rather several that can be delineated. To differentiate further, I spoke to Professor Gregg Henriques on this very topic.
[227:35] I'm attempting to delineate a few concepts, that is, adjectival consciousness, adverbial consciousness, and phenomenal consciousness, which I believe is the same as P-consciousness, but if that's not the same, then that's four different concepts. So what are they? Can you give the audience and myself an explanation as to when some are satisfied but not others, so that we can delineate?
[227:55] Totally. Yep. Yeah. And actually, adjectival and adverbial are going to... when John and I use P-consciousness, phenomenological consciousness, it's reflecting on adjectival-adverbial consciousness. And John refers to John Vervaeke. John Vervaeke, yeah. Because we then did a whole series, Untangling the World Knot, to make sure that our systems were in line, both in terms of our definitional systems and our causal explanatory frameworks. So we did a
[228:24] ten-part series, just the two of us, on Untangling the World Knot of Consciousness. And then we did one on the self, then we did one on well-being, and we also did one on development and transformation with Zach Stein. So our systems, I think, are now completely synced up, at least in relation to the language structures that we have. And so I can tell you what we would mean by adjectival-adverbial consciousness, which then is sort of what most people mean by phenomenological consciousness.
[228:53] Okay, so if I understand correctly, one has to do with degrees and then another has to do with a here-ness and a now-ness. Yeah, exactly. So actually, I would encourage us to say there are three different kinds of definitions of consciousness, okay? I think the first definition of consciousness is functional awareness and responsivity.
[229:16] Okay, this is something that shows awareness and can respond with control and at the broadest definition, then even things like bacteria can show a kind of functional awareness and responsivity.
[229:27] That's the behavioral responsiveness. And when you say, hey, is that guy conscious? What you mean is he's not responding at all. He's not showing any functional awareness and responsivity; he's either knocked out or blacked out or asleep. So when you say consciousness in that way, that's functional awareness and responsivity, which you can see from the outside, and you see it in the way the agent is operating on the arena, because they're showing functional awareness and responsivity.
[229:56] The second meaning of consciousness is subjective conscious experience of being in the world. The first person experience of being and this is where the hard problem of consciousness comes online and that's what most people mean by P consciousness or phenomenological consciousness. It's a subjective experience of being which is only available from the inside out.
[230:19] And then the final one is self-conscious access, so that now I can know that I have had an experience, retrieve it, and then, in its highest form, report on it. So self-consciousness, then, is the capacity to recursively access one's phenomenological experience, and an explicit self-consciousness, which is what humans have and other animals generally don't,
[230:45] is this capacity to say: Kurt, I am thinking about your question, I'm experiencing your face, and this is my narrative in relation to it. So that's explicit self-conscious awareness. Just a moment. You said access. Is that the same as access consciousness, or is that different? That's Ned Block's access consciousness, which is basically: do you have the experience, and then is there a memory access loop that stores it and can then use it?
[231:10] So if I can gain access to it, that's a sort of access consciousness as it relates to that. I want to make sure that I understand this. There's a door behind me. If I go and access it, is what I'm accessing qualia? And is it the action of accessing that's called access consciousness, like the manipulation of data? Right. Well, basically, you have awareness, and then you have memory of the awareness, such that some aspect of you knows that you were aware.
[231:37] So you can imagine awareness without... one way of differentiating it would be: we have a sensory-perceptual awareness that lasts, say, three-tenths of a second to three seconds. It's like a flash, okay? Then you have working memory, which extends it across time and puts it on a loop. That loop is what allows you to then gain access to it and manipulate it. So working memory can be thought of as
[232:03] the whiteboard that allows continuous access to the flash. So there's a flash, and then there's the extension and manipulation of the flash, which you then need access to. Okay. The basic layers of this, what John and I argue, is that out of the body comes what we call valence qualia. Valence qualia basically orients, gives value to things, and can be thought of as something like pleasure and pain in the body. Okay.
[232:31] and it yokes a sensory state with an affective state and points you in a direction. Yoke means tie together, like to yoke stuff together. So this is the sort of the earliest form of consciousness is probably a kind of valence consciousness. Okay. That basically pulls you, you know, it feels good, feels bad kind of deal. It gets me active, gets me passive, but it's this sort of like this kind of felt sense of the body.
[232:58] The argument, from John's and my position, is that that is probably the base of your subjective conscious experience, or the base of your phenomenological experience. Then what happens... and that would be maybe present in reptiles, fish, maybe down into insects at some level.
[233:19] Then the argument would be in birds and mammals, and maybe lower, we don't really know, but there's good reason to believe in birds and mammals, you get a global workspace. The global workspace is when it extends from the sensory flashes into a workspace where you have access and recursive looping on it.
[233:38] And the framing of that is the adverbial consciousness: the framing and extension of that, the here-ness, now-ness, and togetherness that indexes the thing and pulls it together. That's what John calls adverbial consciousness. Okay. And then what's on the screen of that adverbial consciousness is what John calls adjectival consciousness. So it's the screen of attention that orients and indexes; that's adverbial.
[234:07] First, I came in with three questions and now I have so many more. Okay, this valence, is it purely pain and pleasure or is there something else? Are there third, fourth elements? There's certainly pleasure, pain, active, passive to orient and like and want.
[234:29] Basically, you have what's called the circumplex model of affect, which is the core energizing structure of your motivational-emotional system. In its broadest frame it has two poles. One is active-passive: sort of spend energy or conserve energy.
[235:11] And the other is pleasure, something that you want or that you like, versus pain, something that you don't want or don't like, at its most basic. So the valence is what we focused on in relation to the ground of it. But there are definitely at least these two poles, active-passive and pleasure-pain, at a minimum.
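For a concrete picture of that circumplex structure, each affective state can be treated as a point on those two axes, valence and arousal. The sketch below is a toy rendering; the particular state names and placements are illustrative guesses, not anything Gregg or John specified.

import math

# Toy sketch of the circumplex model of affect: each state is a point on two
# axes, valence (pleasure vs. pain) and arousal (active vs. passive).
# The placements below are invented for illustration.
states = {
    "excited":   ( 0.8,  0.7),   # (valence, arousal)
    "content":   ( 0.7, -0.5),
    "anxious":   (-0.6,  0.8),
    "lethargic": (-0.5, -0.7),
}

for name, (valence, arousal) in states.items():
    angle = math.degrees(math.atan2(arousal, valence))  # position around the circle
    print(f"{name:9s} valence={valence:+.1f} arousal={arousal:+.1f} angle={angle:6.1f} deg")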
[235:11] Can you make an analogy with this computer screen right now? So the computer screen is somehow the workspace and then the pixels and the fact that they're bound together is adjectival or the intensity is adverbial or the other way around. Can you just spell out an analogy? Absolutely.
[235:26] Right, so the screening, the framing of the screen, which basically says, okay, this is the frame and the relevance and the here-ness, now-ness of what is going to be brought in: that is adverbial. That's what John calls adverbial consciousness. And he has a whole argument as to why, especially through what's called the pure consciousness event that's achieved in meditation and several other things, there's a differentiation between what he calls the indexing function of consciousness,
[235:52] which basically is the framing: you index, you say "that thing" without specifying what the thing is, okay? It's the "that thing" that brings it in, and then you discriminate on the properties. That's the different pixel shapes that give rise to a form, that give rise to an experiential quality, and that's the adjectival quality. Both of these are John's terms, but I've incorporated them into my work and I use them.
[236:21] Okay. Another analogy now, to abandon the screen: it'd be like if looking is one aspect and then what you're looking at is another. What you're looking at is akin to the qualia; in a pure consciousness event, the "at" may not be there, but you're looking. Exactly. It's the framing. Exactly, index framing. That's why John, for example, takes off his glasses. Okay. The glasses are much more like the adverbial framing: they pull and organize. Okay.
[236:48] It's the looking, okay, the pointing, the indexing. In fact, he uses work in cognitive science, okay, where you can track... like, if I give you four different things to track on a screen, okay, and they're changing colors and changing shapes, four different things you can track.
[237:06] Five, six, seven: you start losing the ability to track. However, what you lose first is the ability to track the specifics. You can tell where something is, but you can't tell what it is. So in other words, you're trying to track everything, but it changes from red to blue to green, and you're much better at "I think it's over there."
[237:26] It indexes, but I can't tell you whether it's an A, a B, a red, or a green; I can't tell you the specificity. So in other words, I'm tracking the entity, okay, that's the index, and that's different from specifying the nature of the form. And indeed, we have lots of different systems that track, like, what is the thing versus how is it moving; the how-is-it-moving is more of an index structure. But if we think of this kind of Bohmian wholeness, we could say that the Meta Crisis is a function of
[237:58] missing Bohmian wholeness and doing optimization on parts. And so I can optimize self at the expense of other, but of course that then leads to others figuring out how to do that, and needing to for protection, and now you get arms races of everybody doing that. The whole externality set: I can optimize self at the expense of other. I can optimize in-group at the expense of out-group.
[238:23] I can optimize one metric at the expense of other metrics. I can optimize one species at the expense of other species. I can optimize my present at the expense of our future, all the way down to one part of self relative to the other parts of self. So the wholeness of all the parts of self in synergy, and all of the people, species, et cetera, and how to consider the whole, how to consider the effects on the whole: maybe
[238:51] that was something that other animals did not have to do. Maybe it was something that even earlier humans didn't have to do, because they couldn't affect the whole all that much. When we have the ability to affect the whole this much, this quickly, because of tech, and particularly because of exponentially powerful tech, whatever ways we are either consciously saying, this is a part of the whole I don't care about or am happy to destroy (conflict theory),
[239:16] or, this is a part of the whole I'm just not even factoring, maybe I don't even know the factor, it's in the unknown-unknown set, but I'm still going to affect it by the thing I do (mistake theory): whatever is outside of my care or my consideration, right? With exponential tech, conflict theory and mistake theory mean something gets harmed, which produces its own counter-responses and cascade effects. The net effect of that, with this much power, is termination. What does it take to steward the power adequately?
[239:46] It is to think about the total cascading effects of the choices, and all agents doing that, and to ask: how do we coordinate all agents doing that in a way that has the integrity of the whole upregulated rather than downregulated? And so I would say Bohmian wholeness is a good framework for alignment: not alignment of an AI with human intent, but a...
[240:14] May I inquire how did you attain such a vast array of knowledge? What's your educational background? What does your routine look like for studying? Is it a daily one where you read a certain type of book and you vary the field week by week? What is the regimen? How did you get the way that you are? I think my
[240:39] learning process is probably in some ways similar to yours. You're very fascinated and curious, and I mean, you did something better than I did, which is: you picked the topics you're most interested in, found the top experts in the world, and got them to basically tutor you for free, like an aristocratic tutoring system. You did a pretty awesome thing there.
[241:04] There were a few cases where I was fortunate enough to be able to do that. Other times I just had to work with the output of their work. But I think for me it was a combo of just innate curiosity independent of any use application. I think it's natural when you love something to want to understand it more. And so for me the impulse to understand the world is kind of a sacred impulse.
[241:30] but then also the desire to serve the world requires understanding it well enough to know how the fuck to maybe do that. So there is both a very practical and very not practical impulse on learning that happened to fortunately converge. And how is it that you're able to articulate the views that you have? How do you develop them? Do you start writing? Do you do it in conversation with people? Do you say some sentiment you realized, you know what, that was actually pretty great. I didn't even realize I thought that until I had said it. Now let me write it down so I can remember it.
[242:02] You know, I have hypotheses about how people develop the ability to communicate well, but my hypotheses about that and my own process are probably different. I think my own process is I was homeschooled, and I was homeschooled in a way that's maybe a little bit like what people call unschooling now, but I had no curriculum at all. But my parents just
[242:27] They had never studied educational theory. They hadn't studied constructivism or decided that Montessori's and Dewey's thoughts on constructivism were right. They just kind of had a sense that if kids' innate interest is facilitated, there's a kind of inborn interest function that will guide them to be who they're supposed to be. So there were some downsides to that, which is:
[242:56] Because I had no curriculum, I didn't have writing a letter a bunch of times to get fine motor skills down, so I have illegible handwriting. I know what the shapes look like, but I have illegible handwriting. I spelled phonetically until I became an adult and spell checker taught me how to spell. Interesting. I missed some significant things.
[243:20] I also got a lot earlier, deeper exposure to the things I was really interested in, which were philosophies, spiritual studies across lots of areas, activism across all the areas, and sciences, and poetry. But my education was largely talking with my parents and some of their friends, and it was largely talking
[243:47] I actually didn't... it wasn't until later that I did a lot of reading and writing. So I think it was just very conversational; it was very native, more so than in a lot of people's developmental environments. I think that's the answer for me. For other people I have seen, it's when they start writing and trying to ask: what is the most concise and precise way of writing this?
[244:13] Alright, that was quite a slew of information, so it's advantageous to go through a summary of what's been discussed so far. You'll get a final word from Daniel in a few minutes. For now, you've watched three-plus hours of this. Let's get our bearings.
[244:43] We've talked about how the emergence of AI poses a unique risk that can't be regulated by a national agency like the FDA for AI, but instead they require some global regulation. Again, this is all argued by Daniel. These aren't my positions. I'm just summarizing what's occurred so far. The risks associated with AI are not those that are comparable to a single chemical
[245:02] as AIs are dynamic agents they respond differently and unpredictably to stimuli. We've also talked about the multipolar trap which is regarding self-policing and a collective theory of justice such as Singapore's drug policy that was outlined and how this line of thinking can be applied to prevent global catastrophic events caused by coordination failures of self-interested agents. You can go back to that bit on national equilibrium to understand a bit about that as well as the multipolar trap section timestamps in the description. We also referenced
[245:31] a false flag alien invasion and whether that could unify humanity. A theme throughout has also been how AI has the potential to revolutionize all fields, but that it also poses risks, such as empowering bad actors and the development of unaligned general artificial intelligence.
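A quick aside from me, not something shown in the episode: if you want to see that multipolar-trap dynamic in miniature, here is a minimal prisoner's-dilemma-style sketch in Python. The payoff numbers and the "restrain"/"defect" labels are entirely made up for illustration; the point is just that the only Nash equilibrium is mutual defection, even though mutual restraint pays both players more.

```python
# Toy multipolar trap: a prisoner's-dilemma-style payoff table.
# All numbers are illustrative, not taken from the conversation.
from itertools import product

ACTIONS = ("restrain", "defect")

# PAYOFF[(my_action, their_action)] -> my payoff
PAYOFF = {
    ("restrain", "restrain"): 3,  # everyone holds back: good shared outcome
    ("restrain", "defect"):   0,  # I hold back while they race ahead: worst for me
    ("defect",   "restrain"): 5,  # I race ahead alone: best for me in the short term
    ("defect",   "defect"):   1,  # everyone races: shared bad outcome
}

def best_response(their_action):
    """My payoff-maximizing action, taking the other player's action as given."""
    return max(ACTIONS, key=lambda mine: PAYOFF[(mine, their_action)])

# A profile is a Nash equilibrium if neither player gains by unilaterally switching.
for mine, theirs in product(ACTIONS, repeat=2):
    is_nash = best_response(theirs) == mine and best_response(mine) == theirs
    print(f"{mine:8} vs {theirs:8} | payoffs ({PAYOFF[(mine, theirs)]}, {PAYOFF[(theirs, mine)]}) | Nash: {is_nash}")

# Only (defect, defect) comes out as a Nash equilibrium, yet (restrain, restrain)
# pays both players more: individually rational moves, collectively worse outcome.
```

That gap between where unilateral incentives push each player and the outcome both would prefer is the coordination failure the multipolar trap names.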
[245:46] Okay, so this happened about one week ago or so. I debated whether or not I should just record an extra piece now, or if I should wait till some next video, but given the pace of this and how much content has already been in this single video, I thought, hey, I'll just record it and give you all some more content. Maybe some people aren't aware of this, and I think they should be.
[246:05] The godfather of AI leaves Google. This is Geoffrey Hinton. If AI could manipulate or possibly figure out a way to kill humans, how could it kill humans? If it gets to be much smarter than us, it'll be very good at manipulation because it will have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it'll figure out ways of getting around
[246:30] Geoffrey Hinton is someone who resigned from Google approximately one week ago because he believed that AI bots were quite scary. Right now they're not more intelligent than us, as far as he can tell, but he thinks they soon may be. He also said, in some of these quotes that I have, that it is hard to see how you can prevent the bad actors from using
[246:55] large language models, or the upcoming artificial intelligence models, for bad things, Dr. Hinton said, after the San Francisco startup OpenAI released a new version of ChatGPT in March. As companies improve their artificial intelligence systems, Hinton believes, they become increasingly dangerous. Look at how it was five years ago and how it is now,
[247:15] he said of AI technology. Take the difference and propagate it forward. That's scary. His immediate concern is that the internet will be flooded with false videos and text, and the average person will not be able to know what's true any longer. Now he says, and I quote: the idea that this stuff could actually get smarter than people, a few people believed that. But most people thought it was way off, and I thought it was way off.
[247:40] In fact, I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that. Also, there's this TED Talk that was published recently as well, just a few days ago it seems, less than two weeks ago. Yejin Choi, who's a computer scientist, said this: Extreme-scale AI models are so expensive to train, and only a few tech companies can afford to do so. So we already see the concentration of power.
[248:09] But what's worse for AI safety: we're now at the mercy of those few tech companies, because researchers in the larger community do not have the means to truly inspect and dissect these models. Then Chris Anderson comes on and asks, hey, look, if what we need is some huge change, why are you advocating for it? Because it's a huge change, a large change, it's not like a foot at a time every time these AIs are released. This is her response.
[248:39] There's a quality of learning that is still not quite there. We don't yet know whether we can fully get there or not just by scaling things up. And then even if we could, do we like this idea of having very, very extreme-scale AI models that only a few can create and own? And lastly, there's this video by Sabine Hossenfelder that was released just a few days ago.
[249:09] Many people are concerned about the sudden rise of AIs, and it's not just fear-mongering. No one knows just how close we are to human-like artificial intelligence. Current concerns have focused on privacy and biases, and that's fair enough. But what I'm more worried about is the impact on society, mental well-being, politics and economics.
[249:31] A just-released report from Goldman Sachs says that the currently existing AI systems can replace 300 million jobs worldwide and about one in four work tasks in the US and Europe. According to Goldman Sachs, the biggest impacts will be felt in developed economies. Our currently unaligned general intelligence is an issue; adding "artificial" in there is like another can of worms, man.
[249:55] The alignment problem isn't just about aligning human intentions with the collective well-being, but also about aligning the different parts of ourselves to work synergistically toward a common goal. This requires a cultural alignment, an enlightenment, I think, was the word he used, though I'm not entirely sure. We also talked about meme complexes that survive past their hosts,
[250:14] and how this is intimately tied up with the notion of the good. And just so you know, my feelings are that memes are an emphatically mechanical way of looking at a complex phenomenon such as a society, an extremely complex phenomenon such as a religion of a society across time and across other societies interacting.
[250:29] I don't believe my point was adequately conveyed, and if you're interested in hearing more, then let me know in the comments and I'll consider expanding on my thoughts in a future podcast. We also talked about naive techno-optimism and how it often overlooks the externalized costs of progress. A responsible techno-optimism requires thinking about how to get more upsides with fewer downsides, which can't be achieved naively: Goodhart's law then applies to any metric that's incentivized, and it leads to perverse forms of fulfilling said metric.
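Another aside from me rather than from the episode: here is a tiny toy sketch in Python of what that metric-gaming looks like. The quality and metric functions and all the numbers are invented for illustration only; the point is that whatever maximizes the rewarded proxy need not be what maximizes the thing the proxy was supposed to track.

```python
# Toy Goodhart's law: optimize a proxy metric hard and it stops tracking the true goal.
# The model and numbers below are made up purely for illustration.

def true_quality(effort, padding):
    # What we actually care about: effort helps, filler padding actively hurts.
    return effort - 0.5 * padding

def rewarded_metric(effort, padding):
    # What gets incentivized: raw length, which can't tell effort from padding.
    return effort + padding

candidates = [(effort, padding) for effort in range(11) for padding in range(11)]

gamed = max(candidates, key=lambda ep: rewarded_metric(*ep))   # metric-maximizing choice
honest = (10, 0)                                               # full effort, no padding

for label, (effort, padding) in (("metric-gamed", gamed), ("honest", honest)):
    print(f"{label:12} metric = {rewarded_metric(effort, padding):4.1f}, "
          f"true quality = {true_quality(effort, padding):4.1f}")

# The gamed choice scores highest on the metric (20.0) but ends up with worse
# true quality (5.0) than the honest one (metric 10.0, true quality 10.0).
```

The specific functions don't matter; the pattern is that whatever gap exists between the incentivized metric and the real goal is exactly where the optimization pressure flows.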
[250:53] Alright, as you can see, so much energy went into this episode, so much thought, so much editing, so much script revision, so much interaction with the interviewee, and double-checking that this was accurately representing what was said, and we plan on continuing that for Season 3. More work went into this episode than
[251:13] any of the other episodes in the whole history of Theories of Everything. If you'd like to support this podcast and continue to see more, then go to patreon.com/curtjaimungal; the link is on the screen right now, as well as in the description. There's also theoriesofeverything.org if you're uncomfortable giving to Patreon, and there's a direct PayPal link if that's what you're interested in. You should also know that, as of right now, we've launched
[251:37] merch. We've just launched the next round of merch. This is the second time that merch has ever been on the TOE channel. The first round is completely gone, you can't find any of those any longer, but now you can see it on screen. These are references to different TOE episodes, like Just Get Wet and Distorture. Thumbs up if you recognize those. There's You have TOE and It's babbling all the way down, that's from Karl Friston, by the way, and Don't thrust your TOE, trust your TOE.
[252:02] Hey, don't talk to me or I'll bring up Hegel. Many of these are references, like I mentioned. I agree, I agree with how you're agreeing with me; this is what Vervaeke said to Iain McGilchrist, and you have to be a significant fan to understand this reference. And then also, there's Vervaeke, who's known for saying there's the being mode and then there's the having mode. Got abiogenesis? I say this phrase, unironically, in everyday conversation: I have a TOE fetish. I'm just a gym rat for TOEs. That's me, that's what I say, frequently. There's also a purse and a TOE hat,
[252:29] some TOE socks. I think those were one of the most popular items of the first round, so the TOE socks are making a comeback. If you want to support the channel and flaunt whatever it is that you feel like you're flaunting, then feel free to visit the merch link in the description, or you can visit tinyurl.com/toemerch, that's T-O-E, merch, M-E-R-C-H. Just so you know, everything, every single thing that you're seeing, this editing, these effects,
[252:55] speaking with the interviewee, all of this is done out of pocket. I pay for the subscription fees. I pay for Zoom. I pay for Adobe. I pay for the editor. I personally pay for travel costs, if there are any. I pay for so much. There's so much that goes into this. Sponsors help, but your support also helps a tremendous, tremendous amount. I wouldn't be able to do this without you.
[253:16] So thank you so much. Thank you for watching for this long, holy moly. Again, if you want to support the channel, you can get some merch if you like, and if you want to give directly on a monthly basis to see episodes like this, with, hopefully, such quality, hopefully something that's educating, that's elucidating to you, that's illuminating to you, then visit patreon.com, or, like I mentioned, there's a PayPal link in the description. There's also a crypto link in the description.
[253:41] Now I'm also interested in hearing from the other side, the other side being the people who are pro-AI, pro unfettered AI, who say, hey, there's nothing to see here, you're all being hyperbolically hysterical. I'd like to see someone respond to what Daniel has said about AI, but also about civilizational risks in general and how AI exacerbates those. So if you think of any guests
[254:00] who would serve as great counterpoints, especially those who are researchers in machine learning, then please suggest them in the comments section. If you're a professor and you're watching and you'd like to have a friendly theolocution, that means a harmonious incongruity, a good-natured debate where the goal isn't to debate but to understand one another's point of view. If you're watching this and you think, hey, I would like to come on to the Theories of Everything channel as a professor along with my other professor friend who believes something that's antithetical to what I believe about AI risk, then
[254:28] please message me; you can find my email address, I'm sure. You can also leave a comment. Yeah, and who knows when the next episode of TOE is coming out. By the way, the next one is going to be John Greenewald, and it should be out in about a week or a week and a half. All right, let's get back to this with Daniel Schmachtenberger.
[254:43] Well, this is a great place to end. Daniel, you're now speaking directly to the people. Well, you have been this whole time, but even more so now to the people who have been watching and listening. What's something you want to leave them with? What should they do? They're here. They've heard all these issues. They hear Bohmian. They're like, OK, that sounds cool. That's motivating. It's a bit abstract, but it is motivating. OK, what should I do, Daniel? I want the earth to be here in decades from now, centuries from now. What should I do?
[255:13] So I'm going to answer this in a way that I think factors in who your audience probably is. I don't know, I don't think you've even shared demographics with me, but based on the attractor, I can guess. If I was answering just to a series of technologists or investors or bureaucrats, I might say something different.
[255:40] and realizing that amongst that audience, there are people who are going to have radically different skills and capacities and parts of it that they feel the most motivated and oriented to. So I'm obviously not going to say one thing everybody should do. Okay, what I'll say is
[256:10] Whether it's hearing a conversation like this about the planetary boundaries, and really thinking about how there's more biomass of animals in factory farms than there is left in the wild, or the total amount of species extinction, or what the risks associated with the rapid development of decentralizing synthetic biology and AI are, you hear these things, you're like, fuck, and it connects you to
[256:35] what is most important beyond your own narrow life or even the politics that is coming into your stream. Or whether it's when you have a deep meditation or a medicine journey or whatever it is and connect to what is most meaningful. Design your life in a way where that experience happens regularly. So what you are paying attention to and optimizing for on a daily basis is connected to the deepest values you have.
[257:03] Because on a daily basis, the people around you and your job and your newsfeed are probably sharing other things. So try to configure it so that the deepest true, good, and beautiful that you're aware of is continuously in your awareness, so your daily choices of how you spend your time and money are continuously at least informed by that. That's the first thing I would say. I'll say a couple of other things. Aligned with that is
[257:35] look at things that are happening in the world online to have a sense of things that you can't see in front of you, but then also get offline and connect with both the trees in front of you and without any modeling or value system, just how innately beautiful they are, and also the mirror neuron experience when you're with a homeless person.
[258:00] So both have a sense of what's happening at scale, but then also ground an embodied sense of your own care for the real world that is not just on a computer. There's a real world here. And then realize, deepen in, that it should actually matter, that it does matter, independent of whether I can formalize a particular
[258:22] meaning or purpose of the universe argument or formalize a response to solipsistic arguments or nihilistic arguments like prima facie, reality is meaningful. And I actually do care. I wouldn't get sad or upset or inspired if I didn't care about anything. I actually do care.
[258:46] And so life matters, and I make choices, and I can make choices that affect the world. So my own choices matter. So what choices am I making every moment? And what is the basis that I want to guide them by, right? To just deepen the sense of the meaningfulness of life in your own choices and the seriousness with which you take how you design your life, particularly factoring in the timeliness and imminence of the issues that we face currently.
[259:17] And then the last thing I would say is, really work to get more educated about the issues that you care about and are concerned about, really work to get more educated about them, get more connected to the people working on them, and really study the views that are counter to the views that naturally appeal to you, so you bias-correct, so that your own
[259:47] Thank you, Daniel. I appreciate you spending almost four hours now
[260:17] Likewise. We covered a bunch of areas that I did not expect, but they're all good areas. I'm curious how the thing ends up getting edited and makes it through. And I'm also curious with your particularly philosophically interested and insightful audience, what questions and thoughts emerge in this and maybe we'll get to address some of them someday. Yeah, there's definitely going to be a part two.
[260:46] Cool. A much more philosophical part two, if this one wasn't already. The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so, as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and build as a community our own TOEs. Links to both are in the description.
[261:15] Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well. If you'd like to support more conversations like this, then do consider visiting theoriesofeverything.org. Again, it's support from the sponsors and you that allow me to work on Toe full-time.
[261:41] Thank you.
View Full JSON Data (Word-Level Timestamps)
{
  "source": "transcribe.metaboat.io",
  "workspace_id": "AXs1igz",
  "job_seq": 8389,
  "audio_duration_seconds": 15719,
  "completed_at": "2025-12-01T01:08:01Z",
  "segments": [
    {
      "end_time": 26.203,
      "index": 0,
      "start_time": 0.009,
      "text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region."
    },
    {
      "end_time": 53.234,
      "index": 1,
      "start_time": 26.203,
      "text": " I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines."
    },
    {
      "end_time": 64.514,
      "index": 2,
      "start_time": 53.558,
      "text": " As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
    },
    {
      "end_time": 95.725,
      "index": 3,
      "start_time": 66.408,
      "text": " This is a real good story about Bronx and his dad Ryan, real United Airlines customers. We were returning home and one of the flight attendants asked Bronx if he wanted to see the flight deck and meet Kath and Andrew. I got to sit in the driver's seat. I grew up in an aviation family and seeing Bronx kind of reminded me of myself when I was that age. That's Andrew, a real United pilot. These small interactions can shape a kid's future. It felt like I was the captain. Allowing my son to see the flight deck will stick with us forever. That's how good leads the way."
    },
    {
      "end_time": 112.159,
      "index": 4,
      "start_time": 96.254,
      "text": " As of today, we are in a war that has moved the atomic clock closer to midnight than it has ever been. We're dealing with nukes and AI and things like that. We could easily have the last chapter in that book if we are not more careful about confident, wrong ideas."
    },
    {
      "end_time": 139.838,
      "index": 5,
      "start_time": 115.162,
      "text": " This is a different sort of podcast, not only because it's Daniel Schmottenberger, one of the most requested guests, who, by the way, I'll give an introduction to shortly, but also because today marks season three of the Theories of Everything podcast. Each episode will be far more in-depth, more challenging, more engaging, have more energy, more effort, and more thought placed into it than any single one of the previous episodes."
    },
    {
      "end_time": 160.401,
      "index": 6,
      "start_time": 139.838,
      "text": " Welcome to the season premiere of season three of the theories of everything podcast with myself Kurt Jaimungal. This will be a journey of a podcast."
    },
    {
      "end_time": 189.377,
      "index": 7,
      "start_time": 160.674,
      "text": " with several moments of pause, of tutelage, of reflection, of surprise appearances, even personal confessions. This is meant for you to be able to watch and re-watch, or listen and re-listen. As with every TOE podcast, there are timestamps in the description, and you can just scroll through to see the different headings, the chapter marks. I say this phrase frequently in the Theories of Everything podcast. This phrase, just get wet, which comes from Wheeler, and it's about how there are these abstruse concepts in mathematics,"
    },
    {
      "end_time": 205.725,
      "index": 8,
      "start_time": 189.377,
      "text": " and you're mainly supposed to get used to them, rather than attempt to bang your head against a wall to understand it the first time through. It's generally in the rewatching that much of the lessons are acquired and absorbed and understood. While you may be listening to this, so either you're walking around and it's on YouTube or you're"
    },
    {
      "end_time": 231.886,
      "index": 9,
      "start_time": 205.725,
      "text": " I don't know about you, but much or most in fact of the podcasts that I watch, I walk away with this feeling like I've learned something but I actually haven't and the next day if you ask me to recall, I wouldn't be able to recall much of it."
    },
    {
      "end_time": 258.729,
      "index": 10,
      "start_time": 231.886,
      "text": " That means that they're great for being entertaining and feeling like I'm learning something. That is the feelings of productivity. But if I actually want to deep dive into subject matter, it seems to fail at that, at least for myself. Therefore, I'm attempting to solve that by working with the interviewee. For instance, we worked with Daniel to making this episode and any episode that comes out from season three onward from this point onward to make it not only a fantastic podcast, but perhaps"
    },
    {
      "end_time": 287.039,
      "index": 11,
      "start_time": 258.729,
      "text": " in this small, humble way to evolve what a podcast could be. You may not know this, but in addition to math and physics, my background is in filmmaking, so I know how powerful certain techniques can be with regards to elucidation. How the difference between making a cut here or making a cut here can be the difference between you absorbing a lesson or it being forgotten. By the way, my name is Kurt Jaimungal, and this is a podcast called Theories of Everything, dedicated to investigating the versa-colored terrain of theories of everything"
    },
    {
      "end_time": 314.872,
      "index": 12,
      "start_time": 287.039,
      "text": " primarily from a theoretical physics perspective, but also venturing beyond that to hopefully understand what the heck is fundamental reality. Get closer to it. Can you do so? Is there a fundamental reality? Is it fundamental? Because even the word fundamental has certain presumptions in it. I'm going to use almost everything from my filmmaking background and my mathematical background to make Toe the deepest dive, not only with the guest, but we'd like it to be the deepest dive on the subject matter that the guest is speaking about."
    },
    {
      "end_time": 332.142,
      "index": 13,
      "start_time": 314.872,
      "text": " It's so supplementary that it's best to call it complementary, as the aim is to achieve so much that there's no fat, there's just meat. It's all substantive, that's the goal. Now there's some necessary infrastructure of concepts to be explicated prior in order to gain the most from this conversation with Daniel, so I'll attempt to outline when needed"
    },
    {
      "end_time": 347.261,
      "index": 14,
      "start_time": 332.142,
      "text": " Again, timestamps are in the description so you can go at your own pace, you can revisit sections. There will also be announcements throughout and especially at the end of this video, so stay tuned. Now, Daniel Schmottenberger is a systems thinker, which is different than reductionism primarily in its focus."
    },
    {
      "end_time": 367.159,
      "index": 15,
      "start_time": 347.261,
      "text": " So systems thinkers think about the interactions, the N2 or greater interactions, the second order or third order. And Daniel in this conversation is constantly referring to the interconnectivity of systems and the potential for unintended consequences. We also talk about the risks associated with AI. We also talk about their boons because that's often overlooked."
    },
    {
      "end_time": 384.906,
      "index": 16,
      "start_time": 367.159,
      "text": " plenty of alarmist talk is on this subject. When talking about the risks, we're mainly talking about its alignment or misalignment with human values. We also talk about why each route, even if it's aligned, isn't exactly salutary. About a third of the way through, Daniel begins to advocate for a cooperative orientation in AI development."
    },
    {
      "end_time": 406.954,
      "index": 17,
      "start_time": 384.906,
      "text": " where the focus is on ensuring that AI systems are designed to benefit and that there are safeguards placed in, much like any other technology. You can think about this in terms of a tweet, a recent tweet by Rob Miles, which says, it's not that hard to go to the moon, but in worlds that manage it, saying that these astronauts will probably die, is responded with a detailed technical plan showing all the fail-safes, testings, and procedures that are in place."
    },
    {
      "end_time": 415.299,
      "index": 18,
      "start_time": 406.954,
      "text": " They're not met with, hey, wow, what an extraordinarily speculative claim. Now, this cooperative orientation resonates with the concept of Nash equilibrium."
    },
    {
      "end_time": 444.957,
      "index": 19,
      "start_time": 417.005,
      "text": " A Nash equilibrium occurs when all players choose their optimal strategy given their beliefs about other people's strategies such that no one player can benefit from altering their strategy. Now that was fairly abstract. So let me give an instance. There's rock, paper, scissors, and you may think, Hey, how the heck can you choose an optimal strategy in this random game? Well, that's the answer. It's actually to be random. So a one third chance of being rock or paper or scissors."
    },
    {
      "end_time": 456.169,
      "index": 20,
      "start_time": 444.957,
      "text": " And you can see this because if you were to choose, let's say, one half chance of being rock, well then a player can beat you one half of the time by choosing their strategy to be paper, and then that means that you can improve your strategy by choosing something else."
    },
    {
      "end_time": 476.63,
      "index": 21,
      "start_time": 456.852,
      "text": " In game theory, a move is something that you do at a particular point in the game or it's a decision that you make. For instance, in this game, you can reveal a card, you can draw a card, you can relocate a chip from one place to another. Moves are the building blocks of games and each player makes a move individually in response to what you do or what you don't do or"
    },
    {
      "end_time": 506.613,
      "index": 22,
      "start_time": 476.63,
      "text": " in response to something that they're thinking, a strategy, for instance. A strategy is a complete plan of action that you employ throughout the game. A strategy is your response to all possible situations, all situations that can be thrown your way. And by the way, that's what this upside down funny looking symbol is. This means for all in math and in logic. It's a comprehensive guide that dictates the actions you take in response to the players you cooperate with and also the players that you don't."
    },
    {
      "end_time": 526.237,
      "index": 23,
      "start_time": 507.466,
      "text": " A common misconception about Nash Equilibria is that they result in the best possible outcome for all players. Actually, most often they're suboptimal for each player. They also have social inefficiencies. For instance, the infamous prisoner's dilemma. Now this relates to AI systems, and as Daniel talks about, this has significant implications for AI."
    },
    {
      "end_time": 551.425,
      "index": 24,
      "start_time": 526.237,
      "text": " Do we know if AI systems will adopt cooperative or uncooperative strategies? How desirable or undesirable will those outcomes be? What about the nation states that possess them? Will it be ordered and positive or will it be chaotic and ataxic like the intersection behind me? Although it's fairly ordered right now, it's usually not like this. A stability of a Nash equilibrium refers to its robustness in face of small changes, perturbations in payoffs or strategies."
    },
    {
      "end_time": 580.725,
      "index": 25,
      "start_time": 551.425,
      "text": " An unstable Nash equilibrium can collapse under slight perturbations, leading to shifts in player strategies and then consequently a new Nash equilibrium. In the case of AI risk, an unstable Nash equilibrium could result in rapid and extreme harmful oscillations in AI behavior as they compete for dominance. And by the way, this isn't including that an AI itself may be fractionated in the way that we are as people with several selves inside us vying for control in a Jungian manner."
    },
    {
      "end_time": 604.206,
      "index": 26,
      "start_time": 582.927,
      "text": " Generalizations also have a huge role in understanding complex systems. So what occurs is you take some concept, and then you list out some conditions, and then you relax some of those conditions. You abstract away. Through the recognition of certain recurring patterns, we can construct frameworks, we can hypothesize, such that hopefully it captures not only this phenomenon, but a diverse array of phenomenon."
    },
    {
      "end_time": 630.657,
      "index": 27,
      "start_time": 604.206,
      "text": " The themes of theories of everything of this channel is what is fundamental reality? And like I mentioned, we generally explore that from a theoretical physics perspective, but we also abstract out and think, well, what is consciousness? Does that arise from material? Does it have a relationship to what's fundamental reality? What about philosophy? What does that have to say metaphysics? So that is generalizations empower prognostication, the discerning of patterns, and they streamline our examination of the environment that we seem to be embedded in."
    },
    {
      "end_time": 654.377,
      "index": 28,
      "start_time": 630.657,
      "text": " Now, in the realm of quantum mechanics, generalizations take on a specific significance. Now, given that we talk about probability and uncertainty, both in these videos, which you're seeing on screen now, and in this conversation with Daniel, thus it's fruitful to explore one powerful generalization of probabilities that bridges classical mechanics with quantum theory called quasi-probability distributions."
    },
    {
      "end_time": 680.23,
      "index": 29,
      "start_time": 660.913,
      "text": " Born in the early days of quantum mechanics, a quasi-probability distribution, also known as a QPD, bridges between classical and quantum theories, there's this guy named Eugene Wigner, who around 1932 published his paper on the quantum corrections of thermodynamic equilibriums, which introduces the Wigner function."
    },
    {
      "end_time": 701.63,
      "index": 30,
      "start_time": 680.401,
      "text": " What's notable here is that both position and momentum appear in this analog to the wave function when ordinarily you choose to work in the so-called momentum space or position space, but not both. To better grasp the concept, think of quasi-probability distributions as maps that encode quantum features into classical-like probability distributions."
    },
    {
      "end_time": 722.79,
      "index": 31,
      "start_time": 701.63,
      "text": " Whenever you hear the suffix like, you should immediately be skeptical as space-like isn't space and time-like isn't the same thing as time. In this instance, classical-like isn't classical. There's something called the Kalmagurov axioms of probability, and some of them are relaxed in these quasi-probability distributions. For instance, you're allowed negative."
    },
    {
      "end_time": 737.944,
      "index": 32,
      "start_time": 722.79,
      "text": " The development of QPDs expanded with the Glauber-Sadarsian p-representation."
    },
    {
      "end_time": 764.565,
      "index": 33,
      "start_time": 737.944,
      "text": " Introduced by Sudarshan in 1963, and refined by Glauber and Husami's Q representation in 1940. QPDs play a crucial role in quantum tomography, which allow us to reconstruct and characterize unknown quantum states. They also maintain their invariance under symplectic transformations preserving the structure of phase space dynamics. You can think of this as preserving the areas of parallelograms formed by vectors in phase space."
    },
    {
      "end_time": 786.254,
      "index": 34,
      "start_time": 764.565,
      "text": " Nowadays, QPDs have ventured beyond the quantum realm, inspiring advancements in machine learning and artificial intelligence. This is called quantum machine learning, and while it's in its infancy, it may be that the next breakthrough in lowering compute lies with these kernel methods and quantum variational encoders. By leveraging QPDs in place of density matrices, researchers"
    },
    {
      "end_time": 804.957,
      "index": 35,
      "start_time": 786.254,
      "text": " researchers gain the ability to study quantum processes with reduced computational complexity. For instance, QPDs have been employed to create quantum-inspired optimization algorithms like the quantum-inspired genetic algorithm, QGA, which incorporates quantum superposition to enhance search and optimization processes."
    },
    {
      "end_time": 827.892,
      "index": 36,
      "start_time": 804.957,
      "text": " Quantum variational autoencoders can be used for tasks such as quantum states compression and quantum generative models, also quantum error mitigation. The whole point of this is that there are new techniques being developed daily, and unlike the incremental change of the past, there's a probability, a low one but it's non-zero, that one of these will remarkably and irrevocably change the landscape of technology."
    },
    {
      "end_time": 845.213,
      "index": 37,
      "start_time": 829.77,
      "text": " So generalizations are important. For instance, spin and GR. So general relativity is known to be the only theory that's consistent with being Lorentz invariance, having an interaction and being spin 2, something called spin 2. This means if you have a field and it's spin 2 and it's not free, so there's interactions,"
    },
    {
      "end_time": 869.36,
      "index": 38,
      "start_time": 845.213,
      "text": " and its Lorentz invariant, then general relativity pops out, meaning you get it as a result. Now this interacting aspect is important because if you have a scalar, so if you have a spin zero field, then what happens is it couples to the trace of the energy momentum tensor, because there's nothing else for it to couple to, and it turns out that does reproduce Newton's law of gravity. However, as soon as you add an interacting relativistic matter, then you don't get that light bends."
    },
    {
      "end_time": 884.377,
      "index": 39,
      "start_time": 869.36,
      "text": " So then you think, well, let's generalize it to spin 1, and then there are some problems there, and you think, well, let's generalize it to spin 3 and above, and there's some no-go theorems by Weinberg there. By the way, the problem with spin 1 is that masses will repel for the same reason that in electromagnetism, that if you have same charges, they repel."
    },
    {
      "end_time": 902.944,
      "index": 40,
      "start_time": 884.377,
      "text": " Okay, other than just a handful of papers, it seems like we've covered all the necessary ground. And when there's more room to be covered, I'll cover it spasmodically throughout the podcast. There'll be links to the papers and to the other concepts that are explored in the description. Most of the prep work for this conversation seems to be out of the way. So now let's introduce Daniel Schmottenberger."
    },
    {
      "end_time": 919.48,
      "index": 41,
      "start_time": 904.002,
      "text": " Welcome, valued listeners and watchers. Today we're honored to introduce this remarkable guest, an extraordinary, extraordinary thinker who transcends conventional boundaries, Daniel Schmottenberger. What are the underlying causes that everything from..."
    },
    {
      "end_time": 946.254,
      "index": 42,
      "start_time": 919.48,
      "text": " As a multidisciplinary aficionado, Daniel's expertise spans complex systems theory, evolutionary dynamics, and existential risk. Topics that challenge the forefront of academic exploration. Seamlessly melding different fields such as philosophy, neuroscience, and sustainability. He offers a comprehensive understanding of our world's most pressing challenges."
    },
    {
      "end_time": 967.261,
      "index": 43,
      "start_time": 947.517,
      "text": " Really, the thing we have to shift is the economy because perverse economic incentive is under the whole thing. There's no way that as long as you have a for-profit military industrial complex as the largest block of the global economy that you could ever have peace, there's an anti-incentive on it as long as there's so much money to be made with mining, et cetera, like we have to fix the nature of economic incentives."
    },
    {
      "end_time": 990.145,
      "index": 44,
      "start_time": 967.261,
      "text": " In 2018, Daniel co-founded the Consilience Project, a groundbreaking initiative that aims to foster societal-wide transformation via the synthesis of disparate domains promoting collaboration, innovation, as well as something we used to call wisdom. Today's conversation delves into AI, consciousness, and morality aligning with the themes of the Toe podcast."
    },
    {
      "end_time": 1004.872,
      "index": 45,
      "start_time": 990.145,
      "text": " It may challenge your beliefs. It'll present alternative perspectives to the AI risk scenarios by also outlining the positive cases which are often overlooked. Ultimately, Daniel offers a fresh outlook on the interconnectedness of reality."
    },
    {
      "end_time": 1023.763,
      "index": 46,
      "start_time": 1013.251,
      "text": " So, you toe watchers you, my name is Kurt Jaimungal. Prepare for a captivating journey as we explore the peerless, enthralling world of Daniel Schmottenberger."
    },
    {
      "end_time": 1052.688,
      "index": 47,
      "start_time": 1024.121,
      "text": " Enjoy. A KFC tale in the pursuit of flavor. The holidays were tricky for the Colonel. He loved people, but he also loved peace and quiet. So he cooked up KFC's 499 Chicken Pot Pie. Warm, flaky, with savory sauce and vegetables. It's a tender chicken-filled excuse to get some time to yourself and step away from decking the halls. Whatever that means. The Colonel lived so we could chicken. KFC's Chicken Pot Pie. The best 499 you'll spend this season."
    },
    {
      "end_time": 1080.196,
      "index": 48,
      "start_time": 1052.688,
      "text": " I do not know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones. Alright, Daniel."
    },
    {
      "end_time": 1110.862,
      "index": 49,
      "start_time": 1080.879,
      "text": " What have you been up to in the past few years? The past few years, trying to understand the unfolding global situation and the trajectories towards existential and global catastrophic risk in particular."
    },
    {
      "end_time": 1140.486,
      "index": 50,
      "start_time": 1111.561,
      "text": " solutions to those that involve control mechanisms that create trajectories towards dystopias and the consideration of what a world that is neither in the attractor basin of catastrophe or dystopia looks like, a kind of third attractor, what would it take to have a civilization that could steward the power of exponential technology much better than we have stewarded all of our previous technological power"
    },
    {
      "end_time": 1168.558,
      "index": 51,
      "start_time": 1141.067,
      "text": " What would that mean in terms of culture and in terms of political economies and governance and things like that? So thinking about those things and acting on specific cases of near term catastrophic risks that we were hoping to ameliorate and helping with various projects on how to transition institutions to be more intelligent and things like that. What are some of these near term catastrophic risks?"
    },
    {
      "end_time": 1201.374,
      "index": 52,
      "start_time": 1172.602,
      "text": " Well, as of today, we are in a war that has moved the atomic clock closer to midnight than it has ever been. And that's a pretty obvious one. And if we were to write a book about the folly of the history of human hubris,"
    },
    {
      "end_time": 1231.732,
      "index": 53,
      "start_time": 1202.5,
      "text": " we would get very concerned about where we are confident about where we're right, where we might actually be wrong and the consequences of it. And as we're dealing with nukes and AI and things like that, we could easily have the last chapter in that book if we are not more careful about confident wrong ideas. So what are all the assumptions in the way we're navigating that particular conflict that might not be right?"
    },
    {
      "end_time": 1258.524,
      "index": 54,
      "start_time": 1232.432,
      "text": " What are the ways we are modeling the various sides and what would an end state that is viable for the world and that just at minimum doesn't go to a global catastrophic risk? That's an example. If we look at the domain of synthetic biology as an exponential, as a different kind of advanced technology, exponential tech, and we look at that the cost of"
    },
    {
      "end_time": 1277.125,
      "index": 55,
      "start_time": 1258.865,
      "text": " Things like gene sequencing and then the ability to synthesize genomes, gene printing are dropping faster than Moore's law in cost. Well, open science means that the most virulent viruses possible studied in"
    },
    {
      "end_time": 1306.596,
      "index": 56,
      "start_time": 1278.456,
      "text": " context that have ethical review boards getting open published then that's a situation where that knowledge combined with near-term decentralized gene printers is decentralized catastrophe weapons on purpose or even accidentally. There are heaps of examples in the environmental space if we look at our planetary boundaries climate change is the one people have the most awareness of publicly but if you look at the other planetary boundaries like the"
    },
    {
      "end_time": 1332.483,
      "index": 57,
      "start_time": 1307.534,
      "text": " mining pollution or chemical pollution or nitrogen dead zones and oceans or biodiversity loss or species extinction. We've already passed certain tipping points. The question is how runaway are those effects? There was an article published a few months ago on PFAS and PFAS, the fluorinated surfactants forever chemicals as they're popularly called."
    },
    {
      "end_time": 1358.114,
      "index": 58,
      "start_time": 1332.841,
      "text": " that found higher than EPA allowable standards of them in rainwater all around the world, including in snowfall in Antarctica, because they actually evaporate. We're not slowing down on the production of those, and they're endocrine disruptors and carcinogens. That doesn't just affect humans, but affects things like the entirety of ecology and soil microorganisms. It's kind of a humongous effect."
    },
    {
      "end_time": 1387.824,
      "index": 59,
      "start_time": 1358.319,
      "text": " Those are all examples. And I would say right now, I know the topic of our conversation today is AI. AI is both a novel example of a possible catastrophic risk through certain types of utilization. It is also an accelerant to every category of catastrophic risk potentially. So that one has a lot of attention at the moment. So that makes AI different than the rest that you've mentioned? Definitely."
    },
    {
      "end_time": 1412.056,
      "index": 60,
      "start_time": 1388.609,
      "text": " Are you focused primarily on avoiding disaster or moving towards something that's much more heavenly or positive, like a Shangri-La? We have an assessment called the Meta Crisis. There's a more popular term out there right now, the Poly Crisis. We've been calling this the Meta Crisis since before coming across that term. Poly Crisis is the idea that"
    },
    {
      "end_time": 1439.565,
      "index": 61,
      "start_time": 1413.251,
      "text": " The global catastrophic risk that we all need to focus on and coordinate on is not just climate change and it's not just wealth inequality and it's not just the breakdown of Pax Americana and the possibility of war or these species extinction issues, but it's lots of things. There's lots of different global catastrophic risks and that they interact with each other and they're complicated and there could even be cascades between them. We don't have to have climate change"
    },
    {
      "end_time": 1465.896,
      "index": 62,
      "start_time": 1440.469,
      "text": " produce total venusification of the earth to produce a global catastrophic risk. It just has to increase the likelihood of extreme weather events in an area. And we've already seen that happening. Statistics on that seem quite clear. And it's not just total climate change, deforestation, affecting local transpiration and heat in an area can have an effect on, and total amount of pavement, whatever, can have an effect on extreme weather events."
    },
    {
      "end_time": 1496.22,
      "index": 63,
      "start_time": 1466.715,
      "text": " but extreme weather events. I mean, we saw what happened to Australia a couple of years ago when a significant percentage of a whole continent burned in a way that we don't have near term historical precedent for. We saw the way that droughts affected the migration that led to the whole Syrian conflict that got very close to a much larger scale conflict. The Australia situation happened to hit a low population density area, but there are plenty of high population density areas"
    },
    {
      "end_time": 1525.043,
      "index": 64,
      "start_time": 1496.732,
      "text": " that are getting very near the temperatures that create total crop failures, whether we're talking about India, Pakistan, Bangladesh, Nigeria, Iran. If you have massive human migration, the UN currently predicts hundreds of millions of climate-mediated migrants in the next decade and a half, then it's pretty easy under those situations to have resource wars. Those can hit existing political fault lines and then technological amplification."
    },
    {
      "end_time": 1555.094,
      "index": 65,
      "start_time": 1525.538,
      "text": " In the past, we obviously had a lot less people. We only had half a billion people for the entirety of the history of the world until the Industrial Revolution. And then with the Green Revolution and nitrogen fertilizer and oil and like that, we went from half a billion people to a billion people overnight in historical timelines. And we went from those people mostly living on local subsistence to almost all living on dependent upon"
    },
    {
      "end_time": 1582.125,
      "index": 66,
      "start_time": 1555.606,
      "text": " very complicated supply chains now that are six continent mediated supply chain. So that means that there's radically more fragility in the life support systems so that local catastrophes can turn to breakdowns of supply chains, economic effects, et cetera, that affect people very widely. So polycrisis kind of looking at all that, metacrisis adds looking at the underlying drivers of all of them. Why do we have all of these issues?"
    },
    {
      "end_time": 1606.732,
      "index": 67,
      "start_time": 1582.824,
      "text": " and what would it take to solve them not just on point-by-point basis but to solve the underlying basis. So we can see that all of these have to do with coordination failures. We can see that underneath all of them there are things like perverse economic interests. If the cost of the environmental pollution to clean it up was something where in the process of the corporation selling the PFAS as a surfactant for"
    },
    {
      "end_time": 1636.357,
      "index": 68,
      "start_time": 1606.937,
      "text": " waterproofing clothes or whatever. It also had to pay for the cost to clean up its effect in the environment or the oil costs had to clean up the effect on the environment. So you didn't have the perverse incentive to externalize costs onto nature's balance sheet, which nobody enforces. Obviously, we don't know none of those environmental issues, right? That would be a totally different situation. So can we address perverse incentive writ large? That would require fundamental changes when we think of as economy and how we enact that. So political economy"
    },
    {
      "end_time": 1666.988,
      "index": 69,
      "start_time": 1637.5,
      "text": " So we think about those things. So I would say with the Metacrisis Assessment, we'd say that we're in a very novel position with regard to catastrophic risk, global catastrophic risk, because until World War II, there was no technology big enough to cause a global catastrophic risk as a result of dumb human choices or human failure quickly. And then with the bomb, there was. It was the beginning. And that's a moment ago in evolutionary time, right?"
    },
    {
      "end_time": 1692.688,
      "index": 70,
      "start_time": 1667.483,
      "text": " And if we reverse back a little bit before the bomb, until the industrial revolution, we didn't have any technology that could have caused global catastrophic risk even cumulatively. The industrial technology, extracting stuff from nature and turning it into human stuff for a little while before turning it into pollution and trash, where we're extracting stuff from nature in ways that destroy the environment faster than nature can replenish and then turning it into trash and pollution faster than it can be processed,"
    },
    {
      "end_time": 1710.128,
      "index": 71,
      "start_time": 1693.131,
      "text": " in doing exponentially more of that because it's coupled to a economy that requires exponential growth to keep up with interest. That creates an existential risk, it creates a catastrophic risk within about a few centuries of cumulative effects and we're basically at that few century point"
    },
    {
      "end_time": 1737.483,
      "index": 72,
      "start_time": 1710.674,
      "text": " And so that's very new. All of our historical systems for that, our historical systems for thinking about governance in the world didn't have to deal with those effects. We could just kind of think about the world as inexhaustible. And then, of course, when we got the bomb, we're like, all right, this is the first technology that rather than racing to implement, we have to ensure that no one ever uses. In all previous technologies, there was a race to implement it. It was a very different situation."
    },
    {
      "end_time": 1762.193,
      "index": 73,
      "start_time": 1738.609,
      "text": " But since that time, a lot more catastrophic technologies have emerged. Catastrophic technologies in terms of applications of AI and synthetic biology and cyber and various things that are way easier to build than nukes and way harder to control. When you have many actors that have access to many different types of catastrophic technology that can't be monitored, you don't get mutually assured destruction and those types of safeties."
    },
    {
      "end_time": 1790.469,
      "index": 74,
      "start_time": 1763.558,
      "text": " So we'd say that we're in a situation where the catastrophic risk landscape is novel. Nothing in history has been anything like it. And the current trajectory doesn't look awesome for making it through. What it would take to make it through actually requires change to those underlying coordination structures of humanity very deeply. So I don't see a model where we do make it through those that doesn't also become a whole lot more awesome."
    },
    {
      "end_time": 1812.398,
      "index": 75,
      "start_time": 1791.186,
      "text": " And that's what we say, the only other example is to control for catastrophes, you can try to put very strong control provisions. Okay, so now, unlike in the past, people could, or pretty soon have gene drives where they could build pandemic weapons in their basement or drone weapons where they could take out infrastructure targets or now AI weapons even easier."
    },
    {
      "end_time": 1839.258,
      "index": 76,
      "start_time": 1812.961,
      "text": " We can't let that happen, so we need ubiquitous surveillance to know what everybody's doing in their basement, because if we don't, then the world is unacceptably fragile. So we can see catastrophes or dystopias, right, because most versions of ubiquitous surveillance are pretty terrible. And so if you can control decentralized action, if you don't control decentralized action, the current decentralized action is moving towards planetary boundaries and conflict and etc. If you"
    },
    {
      "end_time": 1865.06,
      "index": 77,
      "start_time": 1839.787,
      "text": " control it, then what are the checks and balances on that control? Sorry, what do you mean control decentralized actions? So when we look at what it causes catastrophe, so when we're talking about environmental issues, there's not one group that is taking all the fish out of the ocean or causing species extinction or doing all the pollution. There's a decentralized incentive that lots of companies share."
    },
    {
      "end_time": 1894.326,
      "index": 78,
      "start_time": 1865.674,
      "text": " to do those things. So nobody's intentionally trying to remove all the fish from the ocean. They're trying to meet an economic incentive that they have that's associated with fishing. But the cumulative effect of that is overfishing the ocean, right? So if you try to, if there's a decentralized set of activity where the lack of coordination of everybody doing that, everybody pursuing their own near-term optimum creates the shitty term global minimum for everybody, right? A long-term bad outcome for everybody."
    },
    {
      "end_time": 1922.688,
      "index": 79,
      "start_time": 1894.753,
      "text": " If you try to create some centralized control against that, that's a lot of centralized power. And where are the checks and balances on that power? Otherwise, how do you create decentralized coordination? And similarly, if you're looking at things like in an age where terrorism can get exponential technologies, and you don't want exponentially empowered terrorism with catastrophe weapons for everyone,"
    },
    {
      "end_time": 1951.613,
      "index": 80,
      "start_time": 1923.899,
      "text": " To be able to see what's being developed ahead of time, does that look like the degree of surveillance that nobody wants? To be able to control those things not happening, right? That's what I mean. So how to prevent the catastrophes, if the catastrophes are currently the result of the human motivational landscape in a decentralized way, if the solution is a centralized method powerful enough to do it, where are the checks and balances on that power? So a future that is neither"
    },
    {
      "end_time": 1982.176,
      "index": 81,
      "start_time": 1952.466,
      "text": " cascading catastrophes nor control dystopias is the one that we're interested in. And so, yes, I would say the whole focus is that this is now AI comes back into the topic because a lot of people see possibilities for a very pro-topian future with AI where it can help solve coordination issues and solve lots of resource allocation issues. It also, and it can, it can also make lots of things. The catastrophe is worse and dystopia is worse. It's actually kind of unique in being able to make both of those things more powerful."
    },
    {
      "end_time": 2011.544,
      "index": 82,
      "start_time": 1983.353,
      "text": " Can you explain what you mean when you say that the negative externalities are coupled to an economy that depends on exponential growth? Yeah. If you think about it just in the first principle way, the idea is supposed to be something like there are real goods and services that people want that improve their life that we care about."
    },
    {
      "end_time": 2041.561,
      "index": 83,
      "start_time": 2012.892,
      "text": " The services might not be physical goods directly. They might be things humans are doing, but they still depend upon lots of goods, right? If you are going to provide a consultation over a Zoom meeting, you have to have laptops and satellites and power lines and mining and all those things. So you can't separate the service industry from the goods industry. So there's physical stuff that we want. And to mediate the access to that,"
    },
    {
      "end_time": 2070.145,
      "index": 84,
      "start_time": 2042.108,
      "text": " and the exchange of it, we think about it through a currency. So it's supposed to be that there's this physical stuff and the currency is a way of being able to mediate the incentives and exchange of it. But the currency starts to gain its own physics, right? So we make a currency that has no intrinsic value, that is just representative of any kind of value we could want. But the moment we do something like interest, where we're now exponentiating the monetary supply, independent"
    },
    {
      "end_time": 2097.568,
      "index": 85,
      "start_time": 2070.435,
      "text": " of an actual automatic growth of goods or services to not debase the value of the currency, you have to also exponentiate the total amount of goods and services. And everybody's seen how compounding interest works, right? Because you have a particular amount of interest and then you have interest on that amount of interest, so you do get an exponential curve. Obviously, that's just the beginning. Financial services as a whole and all of the dynamics where you have money making on money,"
    },
    {
      "end_time": 2125.572,
      "index": 86,
      "start_time": 2098.404,
      "text": " mean that you expand the monetary supply on an exponential curve, which was based on the idea that there is a natural exponential curve of population anyways, and there's a natural growth of goods and services correlated. But that was true at an early part of a curve that was supposed to be an S curve, right? You have an exponential curve that in flex goes into an X curve, but we don't have the S curve part of the financial system planned."
    },
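To make the shape of this argument concrete, here is a minimal sketch in Python, not from the conversation, comparing compound-interest growth (no upper bound) with a logistic S-curve (bounded by a finite carrying capacity); all parameter values are arbitrary assumptions chosen only for illustration.

```python
import math

# Minimal sketch, not from the conversation: compound-interest (exponential) growth
# vs. a logistic S-curve. All parameter values are arbitrary assumptions.

def compound(principal, rate, years):
    # Money supply growing at a fixed interest rate: principal * (1 + rate)^years
    return principal * (1 + rate) ** years

def logistic(t, capacity, growth_rate, midpoint):
    # An S-curve: looks exponential early, then flattens toward a finite limit
    return capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

for year in range(0, 101, 20):
    money = compound(100, 0.05, year)       # 5% compounding, unbounded
    goods = logistic(year, 1000, 0.1, 50)   # bounded real goods and services
    print(f"year {year:3d}  money {money:9.1f}  goods {goods:7.1f}")

# Early on the two curves track each other; later the money supply keeps
# exponentiating while the bounded curve flattens, which is the mismatch described above.
```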
    {
      "end_time": 2148.899,
      "index": 87,
      "start_time": 2126.015,
      "text": " The financial system has to keep doing exponential growth or it breaks. And not only is that key to the financial system, because what does it mean to have a financial system without interest? Say it's a very deeply different system. Formalizing that was also key to our solution to not have World War III. The history of the world"
    },
    {
      "end_time": 2176.698,
      "index": 88,
      "start_time": 2149.343,
      "text": " in terms of war does not look great, that the major empires and major nations don't stay out of violent conflict with each other very long. And World War I was supposed to be the war that ended all wars, but it wasn't. We had World War II. Now this one really has to be the war that ends all major superpower wars because of the bomb. We can't do that again. And the primary basis of wars, one of the primary bases had been resources."
    },
    {
      "end_time": 2195.572,
      "index": 89,
      "start_time": 2177.312,
      "text": " Which was a particular empire wanted to grow and get more stuff, and that meant taking it from somebody else. And so the idea of if we could exponentially grow global GDP, everybody could have more without taking each other's stuff. It's so highly positive sum that we don't have to go zero sum on more."
    },
    {
      "end_time": 2225.009,
      "index": 90,
      "start_time": 2196.374,
      "text": " So the whole post-World War II banking system, the Bretton Woods monetary system, et cetera, was part of the, how do we not have World War, along with mutually assured destruction, the UN and other international, intergovernmental organizations. But that let's exponentially grow the monetary system also meant that if you have a whole bunch more dollars and you don't have more goods and services, the dollars become worth less and it's just inflation and debasing the currency."
    },
    {
      "end_time": 2252.039,
      "index": 91,
      "start_time": 2225.725,
      "text": " So now you have an artificial incentive to keep growing the physical economy, which also means that the materials economy has to have an exponential amount of nature getting turned from nature into stuff, into trash and pollution in the linear materials economy. And you don't get to exponentially do that on the finite biosphere forever. So the economy is tied to interest and that's at the root of this of what you just explained, not at the root of every catastrophe."
    },
    {
      "end_time": 2274.275,
      "index": 92,
      "start_time": 2254.377,
      "text": " Interest is the beginning of what all of the financial services do, but there's an embedded growth obligation of which interest is the first thing you can see on the economic system. The embedded growth obligation that creates exponentiation of it tied to the physical world where exponential curves don't get to run forever is one of the problems."
    },
    {
      "end_time": 2297.79,
      "index": 93,
      "start_time": 2275.111,
      "text": " There are a handful. This is when we're thinking about a crisis. What are the underlying issues? This is one of the underlying issues. There's quite a few other ones that we can look at to say, if we really want to address the issues, we have to address it at this level. What's the issue with transitioning from something that's exponential to sub-exponential when it comes to the economy? What's the issue with it?"
    },
    {
      "end_time": 2334.701,
      "index": 94,
      "start_time": 2305.725,
      "text": " There's a bunch of ways we could go. There is an old refrain from the hippie days that seems very obvious, I think, as soon as anyone thinks about it, which is that you can't run an exponential growth system on a finite planet forever. That seems kind of obvious and intuitive. Because it's so obvious and intuitive, there's a lot of counters to it. One counter is we're not going to run it on the finite planet forever. We're going to become an interplanetary species, mine asteroids, ship our ways to the sun, blah, blah, blah."
    },
    {
      "end_time": 2364.48,
      "index": 95,
      "start_time": 2335.35,
      "text": " I don't think that we are anywhere near close, independent of the ethical or aesthetic argument on if us obliterating our planet's carrying capacity and then exporting that to the rest of the universe is a good or lovely idea or not, independent of that argument. The timelines by which that could actually meet the humanity superorganism growing needs relative to the timelines where this thing starts failing don't work."
    },
    {
      "end_time": 2394.991,
      "index": 96,
      "start_time": 2365.418,
      "text": " So that's not an answer. That said, the attempt to even try to get there quicker is a utilization of resources here that is speeding up the breakdown here faster than it is providing alternatives. The other answer people have to why there could be an exponential growth forever is because digital, right? That more and more money is a result of software being created, a result of"
    },
    {
      "end_time": 2424.787,
      "index": 97,
      "start_time": 2395.947,
      "text": " digital entertainment being created and that there's a lot less physical impact of that. And so we can keep growing digital goods because it doesn't affect the physical plan and physical supply chain. So we can keep the exponential growth up forever. That's very much the kind of Silicon Valley take on it. Of course, that has an effect. It does not solve the problem. And it's pretty straightforward to see why."
    },
    {
      "end_time": 2453.951,
      "index": 98,
      "start_time": 2425.52,
      "text": " which is for, let's go ahead and say software in particular. Does software have to run on hardware where the computer systems and server banks and satellites and et cetera require massive mining, which also requires a financial system and police and courts to maintain the entire cybernetic system that runs all that? Yes, it does."
    },
    {
      "end_time": 2483.695,
      "index": 99,
      "start_time": 2454.787,
      "text": " does a lot more compute require more of that more atoms adjacent services energy yes but also for us to consider software valuable it's either because we're engaging with what's we're engaging with what it's doing directly so that's the case in entertainment or education or something but then it is interfacing with the finite resource called human attention of which there is a finite amount"
    },
    {
      "end_time": 2508.268,
      "index": 100,
      "start_time": 2484.77,
      "text": " or because we're not necessarily being entertained or educated or engaging with it, but it's doing something for us, again, to consider valuable. It is doing something to the physical world. So the software is engaging, say, supply chain optimization or new modeling for how to make better transistors or something like that."
    },
    {
      "end_time": 2533.302,
      "index": 101,
      "start_time": 2509.292,
      "text": " But then it's still moving atoms around using energy and physical space, which is a finite resource. If it is not either affecting the physical world or affecting our attention, why would we value it? We don't. So it still bottoms out on finite resources. So I can't just keep producing an infinite amount of software where you get more and more content that nobody has time to watch."
    },
    {
      "end_time": 2561.493,
      "index": 102,
      "start_time": 2533.729,
      "text": " and more and more designs for physical things that we don't have physical atoms for or energy for, you get a diminishing return on the value of it if it's not coupled to things that are finite, right? The value of it is in modulating things that are also finite. So there's a coupling coefficient there. You still don't get an exponential curve. So what we just did is say the old hippie refrain, you can't run an exponential economy on a finite planet forever. The alt, the counters to it don't hold."
    },
    {
      "end_time": 2590.043,
      "index": 103,
      "start_time": 2562.159,
      "text": " What about mind uploading or some computer brain interface to allow us to have more attention exponentially? That's almost like the hybrid of the other two, which is get beyond this planet and do it more digitally, so get beyond this brain."
    },
    {
      "end_time": 2619.906,
      "index": 104,
      "start_time": 2590.759,
      "text": " become digital gods in the singularity universe. Again, I think there are pretty interesting arguments we can have, ethically, aesthetically, and epistemically, about why that is neither possible nor desirable. But independent of those,"
    },
    {
      "end_time": 2648.473,
      "index": 105,
      "start_time": 2621.886,
      "text": " I don't think it's anywhere close. And same like the multi-planetary species, it is nowhere near close enough to address any of the timelines we have by which economy has to change because the growth imperative on the economy as is, is moving us towards catastrophic tipping points. So if it were close, would that change your assessment or you still have other issues? If it were close, then we would have to say,"
    },
    {
      "end_time": 2672.056,
      "index": 106,
      "start_time": 2649.94,
      "text": " First, that is implying that we have a good reason to think that it's possible. And that means all the axioms that consciousness is substrate independent, that consciousness is purely a function of compute, strong computationalism holds,"
    },
    {
      "end_time": 2701.408,
      "index": 107,
      "start_time": 2672.363,
      "text": " that we could map the states of the brain and or if we believe in embodied cognition, the physiology adequately to represent that informational system on some other substrate that that could operate with an amount of energy that is and substrate that's possible, blah, blah, blah. So first we have to believe that's possible. I would question literally every one of the axioms or assumptions I just said. We're going to get to that."
    },
    {
      "end_time": 2728.166,
      "index": 108,
      "start_time": 2702.142,
      "text": " we would say, is it desirable? And how do we know? How ahead of time? And now you get something very much like, how do I know that the AI is sentient? Which for the most part on all AI risk topics, whether it's sentient or not is irrelevant, whether it does stuff is all that matters. But how do you tell if it's sentient and all of the"
    },
    {
      "end_time": 2756.51,
      "index": 109,
      "start_time": 2728.746,
      "text": " are actually really hard because what we're asking is how can we use third-person observation to infer something about the nature of first-person, given the ontological difference between them? So how would we know that that future is desirable? Are there safe-to-fail tests and what would we have to test to know it to start making that conversion? But I don't"
    },
    {
      "end_time": 2772.654,
      "index": 110,
      "start_time": 2757.995,
      "text": " I don't think we have to answer any of those questions because I don't think that anybody that is working on whole brain emulation thinks that we are close enough that it would address the timeline of the economy issues that you're addressing."
    },
    {
      "end_time": 2799.821,
      "index": 111,
      "start_time": 2773.404,
      "text": " Okay, so"
    },
    {
      "end_time": 2827.91,
      "index": 112,
      "start_time": 2802.142,
      "text": " This is now much more a proper theory of everything conversation than the topic that we intended for the day, which is about AI risk. So what I will do is say briefly the conclusion of my thoughts on this without actually going into it in depth, but I would be happy to explore that at some point. I think"
    },
    {
      "end_time": 2856.647,
      "index": 113,
      "start_time": 2828.643,
      "text": " that how I come to my position on it to try to do a kind of proper construction takes a while. So briefly, I'll say I'm not a strong computationist, meaning don't believe that mind, universe, sentience, qualia is purely a function of computation."
    },
    {
      "end_time": 2883.712,
      "index": 114,
      "start_time": 2857.79,
      "text": " I am not an emergent physicalist that believes that consciousness is an epiphenomena of non-conscious physics, that in the same way that we have weak emergence, more of a particular property through certain kind of combinatorics, or strong emergence, new properties emerging out of some type of interaction where that hadn't occurred before, like a cell respirating while none of the molecules that make it up respirate."
    },
    {
      "end_time": 2910.657,
      "index": 115,
      "start_time": 2884.531,
      "text": " I believe in weak emergence. That happens all the time. You get more of certain qualities. It happens in metallurgy when you combine metals where the combined tensile strength or shearing strength or whatever is more than you would expect as a result of the nature of how the molecular lattice is formed. You get more of a thing of the same type. I believe in strong emergence, which is you get new types of things that you didn't have before, like respiration and replication out of parts, none of which do that."
    },
    {
      "end_time": 2940.93,
      "index": 116,
      "start_time": 2911.834,
      "text": " But those are all still in the domain of third person, assessable things. The idea of radical emergence, that you get the emergence of first person out of third person, or third person out of first person, which is idealism on one side and physicalism on the other, I don't buy either of. I think that idealism and physicalism are similar types of reductionism, where they both take certain ontological assumptions to bootload their epistemology and then get self"
    },
    {
      "end_time": 2970.845,
      "index": 117,
      "start_time": 2941.578,
      "text": " referential dynamics. So I don't think that if a computational system gets advanced enough automatically consciousness pops out of it. That's one. Two, I do think that the process of a system self-organizing is fundamentally connected to the nature of experience of selfness and things that are being designed and are not self-organizing where the"
    },
    {
      "end_time": 2991.323,
      "index": 118,
      "start_time": 2971.63,
      "text": " Boundary between the system and its environment that exchanges energy and information and matter across the boundary is a Autopoetic process. I do believe that's fundamental to the nature of things that have self other recognition and"
    },
    {
      "end_time": 3022.09,
      "index": 119,
      "start_time": 2992.415,
      "text": " on substrate independence. I do believe that carbon and silicon are different in pretty fundamental ways that don't orient to the same types of possibilities. I think that that's actually pretty important to the AI risk argument. I'll just go ahead and say those things. I also don't think"
    },
    {
      "end_time": 3050.418,
      "index": 120,
      "start_time": 3022.807,
      "text": " I believe that embodied cognition in the Demasio sense is important and that a scan of purely brain states isn't sufficient. I also don't think that a scan of brain states is possible even in theory. Sorry to interrupt. I know you said you don't believe it's possible. What if it is? And you're able to scan your brain state and body state. So we take into account the embodied cognition. Sure."
    },
    {
      "end_time": 3084.94,
      "index": 121,
      "start_time": 3056.937,
      "text": " I think that, okay, it's not simply a matter of scanning the brain state. We need to scan the rest of the central nervous system. No, we also have to get the peripheral nervous system. No, we have to get the endocrine system. No, all of the cells have the production of and reception of neuroendocrine type things. We have to scan the whole thing. Does that then extend to the"
    },
    {
      "end_time": 3115.333,
      "index": 122,
      "start_time": 3085.486,
      "text": " microbiome, virome, etc. I would argue yes. Does it then extend to the environment? I would argue yes. Where does that stop its extension is actually a very important question. So I would take the embodied cognition a step further. The other thing is Stuart Kaufman's arguments about quantum amplification to the mesoscopic level."
    },
    {
      "end_time": 3145.691,
      "index": 123,
      "start_time": 3115.998,
      "text": " that quantum mechanical events don't just fully cancel themselves out at the subatomic level and at the level of brains. Everything that is happening is straightforwardly classical, but that there is quantum mechanical, i.e. some fundamental kind of indeterminism built in phenomena that end up affecting what happens at the level of molecules."
    },
    {
      "end_time": 3174.002,
      "index": 124,
      "start_time": 3146.459,
      "text": " Now then, one can say, well, does that just mean we have to add a certain amount of a random function in, or is there something else? This is a big rabbit hole, I would say, for another time, because then you get into quantum entanglement and coherence, so you get something that is neither perfectly random, meaning without pattern. You get a born distribution even on a single one, but it's also not deterministic or with hidden variables."
    },
    {
      "end_time": 3197.688,
      "index": 125,
      "start_time": 3174.377,
      "text": " Do I think that what's happening in the brain-body system is not purely deterministic and also as a result of that means you could not measure or scan it even in principle in that kind of Heisenberg sense. Yes, I think that. Have you heard of David Walpart and his limits on inference systems, inference machines, sorry? I have not studied his work."
    },
    {
      "end_time": 3213.353,
      "index": 126,
      "start_time": 3198.012,
      "text": " Let me talk about the economy, which only on your podcast would happen."
    },
    {
      "end_time": 3237.483,
      "index": 127,
      "start_time": 3213.865,
      "text": " somehow this exponential curve starts to get to where the S is the top of the S that the halting or the slowing down of the economy is something that's so catastrophic and calamitous rather than something that would mutate and if we need to just at that point as it starts to slow down we make minor changes here and there is this something that's entirely new like will they all come crashing down"
    },
    {
      "end_time": 3263.797,
      "index": 128,
      "start_time": 3239.701,
      "text": " Let me make the question clear. It sounds like, look, the economy is tied to exponential growth. We can't grow exponentially. Virtually no one believes that. So at some point, and let's just imagine it's three decades, just to give some numbers. So at some point, three decades from now, this exponential curve for all of the economy will start to show its legs and start to weaken and we'll see that it's nearing the S part."
    },
    {
      "end_time": 3281.288,
      "index": 129,
      "start_time": 3263.797,
      "text": " So what does that mean that there's been fire in the streets that the buildings don't work that the water doesn't run anymore like what will happen? Okay So people often make jokes about"
    },
    {
      "end_time": 3311.817,
      "index": 130,
      "start_time": 3281.834,
      "text": " physicists in particular starting to look at biology and language and society and modeling in particularly funny reductionist ways because they tried to map the entire economy through the second law of thermodynamics or something like that. And because what we're really talking about is the maximally complex and anthropocomplex thing and embedded complexity we can because we're talking about all of human motives and how do humans respond to the idea that"
    },
    {
      "end_time": 3328.677,
      "index": 131,
      "start_time": 3312.415,
      "text": " there's fundamentally limits on the growth possible to them, or there's less stuff possible for them, or whether it's issues that are associated with"
    },
    {
      "end_time": 3355.179,
      "index": 132,
      "start_time": 3329.275,
      "text": " environmental extraction. So here's one of the classic challenges is that the problems, the catastrophic risks, many of them in the environmental category are the result of cumulative action long-term where the upsides are the result of individual action short-term and the asymmetry between those is particularly problematic. That's why you get this collective choice-making challenge, meaning if I cut down a tree for timber"
    },
    {
      "end_time": 3382.688,
      "index": 133,
      "start_time": 3356.169,
      "text": " I don't obviously perceive the change to the atmosphere or to the climate or to watersheds or to anything. But my bank account goes up through being able to sell that lumber immediately. And the same is true if I fish or if I do anything like that. But when you run the Kantian categorical imperative across it and you have the movement from half a billion people doing it to a pre-industrial revolution to eight billion,"
    },
    {
      "end_time": 3408.933,
      "index": 134,
      "start_time": 3383.097,
      "text": " and you have something like in the industrial world, a hundred X resource per capita consumption, uh, just calorically measured today than at the beginning of the industrial revolution. Then you start realizing, okay, well, the cumulative effects of that don't work. They break the, they break the planet and they start creating, um, tipping points that auto propagate in the wrong direction. But that, but no individual person or even local area doing the thing"
    },
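As a rough back-of-the-envelope check of the cumulative-effect point, here is a trivial sketch using the figures as stated above (roughly half a billion people pre-industrially, roughly eight billion now, and something like a hundred X per-capita consumption); the multiplication itself is only illustrative.

```python
# Back-of-the-envelope sketch using the figures as stated above; illustrative only.
pre_industrial_population = 0.5e9   # roughly half a billion people
current_population = 8e9            # roughly eight billion people
per_capita_multiplier = 100         # ~100x resource consumption per capita (industrial world)

population_multiplier = current_population / pre_industrial_population   # 16x
total_throughput_multiplier = population_multiplier * per_capita_multiplier
print(total_throughput_multiplier)  # 1600.0 -> total resource throughput ~1600x the baseline
```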
    {
      "end_time": 3421.442,
      "index": 135,
      "start_time": 3409.753,
      "text": " recognizes their action is driving that downside. And how do you get global enforcement of the thing? And if you don't get global enforcement, why should anyone let themselves be curtailed and other people aren't being curtailed and that'll give them game theoretic advantage?"
    },
    {
      "end_time": 3444.002,
      "index": 136,
      "start_time": 3422.688,
      "text": " so this is actually there's a handful of asymmetries that are important to understand with regard to risk all right we've covered plenty so far and so it's fruitful to have a brief summary we've talked about the faulty foundation of our monetary system daniel argues that post world war two especially our economic system has not only encouraged but been dependent on exponential monetary growth"
    },
    {
      "end_time": 3464.906,
      "index": 137,
      "start_time": 3444.002,
      "text": " and this can't continually occur we've also talked about the digital escape plan and how this is an illusion at least in Daniel's eye he believes that digital growth has physical costs because their hardware their human attention limits their finite resources linear resources as he calls them though i have my issues with the term linear resource because technically anything is linear when measured against itself"
    },
    {
      "end_time": 3487.995,
      "index": 138,
      "start_time": 3464.906,
      "text": " We've also talked about how moving to Mars won't save us, us being civilization. Daniel believes that the idea of becoming an interplanetary species to escape resource limitations is unrealistic, perhaps even ethically questionable. We've also talked about how mind uploading is not what it's cracked up to be, it may not occur, and even if it does, it's not the answer. Because it's either unfeasible, but even if it's feasible, Daniel believes it to be undesirable."
    },
    {
      "end_time": 3516.92,
      "index": 139,
      "start_time": 3487.995,
      "text": " Another resource as we expand our digital footprint is the privacy of our digital resources. You can see this being recognized even by OpenAI as they recently announced an incognito mode. And this is where our sponsor comes in. Do you ever get the feeling that your internet provider knows more about you than your own mother? It's like they're in your head. They can predict your next move. When I'm researching complicated physics topics or checking the latest news or just in general what I want privacy on, I don't want to have to go and research which VPN is best. I don't want to be bothered by that."
    },
    {
      "end_time": 3538.763,
      "index": 140,
      "start_time": 3516.92,
      "text": " Well, I and you can put those fears to rest with Private Internet Access, the VPN provider that's got your back. With over 30 million downloads, they're the real deal when it comes to keeping your online activity private. And they've got apps for every operating system. You can protect 10 of your devices at once, even if you're unfortunate enough like me to love Windows."
    },
    {
      "end_time": 3564.428,
      "index": 141,
      "start_time": 3538.763,
      "text": " And if you're worried about strange items popping up in your search history, don't worry. I'm not judging. Private Internet Access comes in here as they encrypt your connection. They hide your IP address so your ISP doesn't have access to those strange items in your history. They make you a ghost online. It's like Batman's cave before your browsing history. With Private Internet Access, you can keep your odd internet searches, let's say, on the down low."
    },
    {
      "end_time": 3590.111,
      "index": 142,
      "start_time": 3564.428,
      "text": " It's like having your own personal confessional booth, except you never need to talk to a priest. So why wait? Head over to p-i-a-v-p-n dot com slash toe t-o-e and get yourself an 82, an 82% discount. That's less than the price of a coffee per month. And let's face it, your online privacy is worth way more than a latte. That's p-i-a-v-p-n dot com slash t-o-e now and get the protection you deserve."
    },
    {
      "end_time": 3619.019,
      "index": 143,
      "start_time": 3590.111,
      "text": " Brilliance is a place where there are bite-sized interactive learning experiences for science, engineering, and mathematics. Artificial intelligence in its current form uses machine learning, which uses neural nets, often at least, and there are several courses on Brilliance websites teaching you the concepts underlying neural nets and computation in an extremely intuitive manner that's interactive, which is unlike almost any of the tutorials out there. They quiz you. I personally took the course on random variable distributions and knowledge and uncertainty,"
    },
    {
      "end_time": 3644.48,
      "index": 144,
      "start_time": 3619.019,
      "text": " Because I wanted to learn more about entropy, especially as there may be a video coming out on entropy, as well as you can learn group theory on their website, which underlies physics, that is SU3 x SU2 x U1 is the Standard Model Gauge Group. Visit brilliant.org slash TOE TOE to get 20% off your annual premium subscription. As usual, I recommend you don't stop before four lessons. You have to just get wet."
    },
    {
      "end_time": 3662.022,
      "index": 145,
      "start_time": 3644.48,
      "text": " You have to try it out. I think you'll be greatly surprised at the ease at which you can now comprehend subjects you previously had a difficult time grokking. The bad is the material from which the good may learn."
    },
    {
      "end_time": 3693.848,
      "index": 146,
      "start_time": 3663.985,
      "text": " So this is actually, there's a handful of asymmetries that are important to understand with regard to risk. One is this one that I'm saying, which is you have risks that are the result of long-term cumulative action, but that you actually have to change individual action because of that. But the upside, the benefit, the individual making that action realizes the benefit directly. And so this is a classic tragedy of the commons type issue, right? But tragedy of the commons at a, not just local scales, but at global scales."
    },
    {
      "end_time": 3723.319,
      "index": 147,
      "start_time": 3694.77,
      "text": " Some of the other asymmetries are particularly important is people who focus on the upside, who focus on opportunity, do better game theoretically for the most part than people who focus on risk when it comes to new technologies and advancement and progress in general. Because if someone says, hey, we thought Vioxx or DDT or any number of things were good ideas or leaded gasoline, they ended up being really bad later."
    },
    {
      "end_time": 3735.572,
      "index": 148,
      "start_time": 3723.814,
      "text": " We want to do really good long-term safety testing regarding first, second, third order effects of this. They're going to spend a lot of money and knock it first to market and then probably decide the whole thing wasn't a good idea at all."
    },
    {
      "end_time": 3765.623,
      "index": 149,
      "start_time": 3736.084,
      "text": " or if they do decide how to do a safe version, it takes them a very long time. The person says, no, the risks aren't that bad. Let me show you does a bullshit job of risk analysis as a box checking process and then really emphasizes the upsides is going to get first mover advantage, make all the money. They will privatize the gains, socialize the losses. Then when the problems get revealed long time later and are unfixable, that will have already happened. So these are just examples of some of the kind of choice making asymmetries."
    },
    {
      "end_time": 3782.381,
      "index": 150,
      "start_time": 3766.34,
      "text": " Totally. Not a particular corporation, but a particularly important consideration in the entire topic."
    },
    {
      "end_time": 3804.087,
      "index": 151,
      "start_time": 3782.688,
      "text": " One view is that Google is not coming out with something that's competitive, like Bard is not competitive. I think even Google would admit that. And so one view is that, well, they're highly testing. Another one, I've spoken to some people behind the scenes and they say Google doesn't have anything. They don't have anything like chat GPT. It's BS when they say so. Even OpenAI doesn't know why chat GPT works."
    },
    {
      "end_time": 3829.189,
      "index": 152,
      "start_time": 3804.087,
      "text": " I have heard a lot of things about the"
    },
    {
      "end_time": 3855.282,
      "index": 153,
      "start_time": 3829.735,
      "text": " choices that both companies made to not release stuff and safety studies that they did, and then what influenced the choices to release stuff inside of Microsoft and OpenAI and how Google's handling it. I don't know that these stories are the totality of information on it that's relevant. Do I think that economic"
    },
    {
      "end_time": 3880.128,
      "index": 154,
      "start_time": 3855.998,
      "text": " forcing functions have played a role in something that affected the safety analysis totally. Do I think that that is an unacceptably dumb thing on a topic that has this level of safety risk associated totally. So now getting into what is unique about AI risk."
    },
    {
      "end_time": 3907.159,
      "index": 155,
      "start_time": 3880.486,
      "text": " What is unique about it relative to all other risks? People are saying things like we need an FDA for AI right now, which I would argue is both true and a profoundly inadequate analogy because a single new chemical that comes out is not an agent. It is not a dynamic thing that continues to respond differently to huge numbers of new unpredictable stimuli."
    },
    {
      "end_time": 3929.019,
      "index": 156,
      "start_time": 3907.568,
      "text": " So how you do the assessment of the phase space of possible things is totally different. It would probably be good to dive into what is the risk space of AI, why is it unique, and how, given all of the differences of concern, how to framework and think about that properly."
    },
    {
      "end_time": 3958.746,
      "index": 157,
      "start_time": 3929.804,
      "text": " What else is unique about it? And why can't we have an FDA or a UN version of an FDA for AI? And when I say UN, sorry, what I mean is global. Yeah. Well, obviously you bring up UN and say global because you have to have global regulation on something like that, right? In the same way that when people talk about climate regulation, if we were, if any country, if any"
    },
    {
      "end_time": 3979.292,
      "index": 158,
      "start_time": 3960.333,
      "text": " Group of countries was to try to price carbon properly, meaning what does it take to renewably produce those hydrocarbons and what does it take to in real time fix all of the effects, both sequester the CO2, clean up the oil spills, whatever it is."
    },
    {
      "end_time": 4009.718,
      "index": 159,
      "start_time": 3980.009,
      "text": " the price of oil would become high enough with those costs internalized that oil then as an input to industries, literally every industry would be non-profitable. And so even if any country was to try to make some steps in the direction of internalizing cost and other ones didn't, then the other ones who continue to externalize their costs get so much further ahead in terms of GDP that can be applied to militaries and surplus of many different kinds and advancing exponential tech."
    },
    {
      "end_time": 4033.575,
      "index": 160,
      "start_time": 4010.179,
      "text": " But insofar as those are also competing entities for world resources and control, that's not a viable thing. This is true for AI as well. And this then starts to hit this other issue, which is if you can't regulate something on a purely national level, because it's not just how does it affect the people in the nation, but how does it affect the nation's capability to interact with other nations,"
    },
    {
      "end_time": 4053.063,
      "index": 161,
      "start_time": 4035.077,
      "text": " Now you get to the creation of the UN was kind of the recognition in the existence in the emergence of World War II that nation state governance alone was not adequate to prevent World War. Obviously, that's why the League of Nations came after World War I, and it was not strong enough to prevent World War II."
    },
    {
      "end_time": 4081.493,
      "index": 162,
      "start_time": 4053.968,
      "text": " Now you get to the topic of why so many people are super concerned about global government and don't want global government. And they'll say things like the risks are being exaggerated and blown out of proportion to be able to drive control paradigms. And the people who want to have a one world government or a powerful nation governments exaggerate the risks so that they can drive control paradigms where they will be the one in the control side. This can be excessive paranoia, but it's also"
    },
    {
      "end_time": 4106.92,
      "index": 163,
      "start_time": 4082.637,
      "text": " a really realistic and founded consideration, which is, are there any radical asymmetries of power where the side that had all the power used it really well? Historically, it doesn't look that good, right? And so we see a reason to be concerned about something like a one world government that has no possible checks and balances."
    },
    {
      "end_time": 4136.084,
      "index": 164,
      "start_time": 4107.654,
      "text": " But there's also a concern about not having anything where you get some type of global governance, if not government, meaning some unified establishment that has monopoly of violence, at least governance, meaning some coordination where everyone is not left in a multipolar trap saying, we can't bind our behavior because they won't. And if they won't, then we have to race ahead, right? We can't stop overfishing because the fish will all get killed because they're doing the thing anyway. So not only will we not stop,"
    },
    {
      "end_time": 4160.026,
      "index": 165,
      "start_time": 4136.459,
      "text": " We will actually race to do it faster than them so they don't get more resource relative to us, those types of issues. So obviously with regard to the environment, we call it a tragedy of the commons with regard to the development of possible military technology, we call it an arms race. Both of them are examples of social traps or multipolar traps. Briefly, why do you call it a multipolar trap?"
    },
    {
      "end_time": 4184.735,
      "index": 166,
      "start_time": 4160.981,
      "text": " Social trap is a term used in the social sciences quite a lot to indicate a coordination failure of this type where each agent pursuing their own near-term rational interest creates a situation that moves the entire global situation long-term to a suboptimal equilibria for everybody. There's a lot of work in various fields of social science on social traps."
    },
    {
      "end_time": 4214.497,
      "index": 167,
      "start_time": 4185.111,
      "text": " The first time I'm aware of the term multipolar trap entering the conversation was in the great article called Meditations on Moloch by Scott Alexander, where I believe he's the one who coined the term multipolar trap there, and it's pretty close to a social trap. If I was going to define a distinction, it might be something like, in a classic tragedy of the commons scenario,"
    },
    {
      "end_time": 4243.643,
      "index": 168,
      "start_time": 4214.889,
      "text": " where everyone is utilizing a common wealth resource like say fishing or cutting down trees in a forest or whatever. You're not necessarily in the situation where everyone is racing to do it faster than the other person to destroy it, just them simply not curtailing their own behavior. And yet you have a resource per capita consumption growth and a total population growth such that the environment can't deal with it."
    },
    {
      "end_time": 4264.531,
      "index": 169,
      "start_time": 4244.241,
      "text": " You still end up getting environmental devastation. But as soon as you kind of move over into, hey, even if I don't cut down the trees or I don't fish, the other side is going to, so I literally don't have the ability to protect the forest, but I do have the ability to cut down some of it, benefit myself or our people, our tribe, our nation or whatever it is."
    },
    {
      "end_time": 4281.459,
      "index": 170,
      "start_time": 4265.06,
      "text": " And if I don't, the other guys will break it down anyways, but they'll also use the economic advantage of that against us and whatever the next rivalrous conflict is. So not only do I have to keep doing it, but I have to race to do it faster than they do. I actually have to apply innovation now."
    },
    {
      "end_time": 4310.23,
      "index": 171,
      "start_time": 4281.886,
      "text": " And so this is where you get an accelerating dynamic. And if you don't just have two actors doing this, but you have many actors doing this where it's very hard to be able to bind it, because how do you ensure that all the actors are keeping the agreement? You have to make some non-proliferation agreement. You have to have some way of ensuring that they're all keeping it, and you have to have some enforceable deterrent if anyone violates it. Those happen, but it's not trivial. It's not trivial to enact those. And it's particularly, so let's say"
    },
    {
      "end_time": 4333.251,
      "index": 172,
      "start_time": 4310.674,
      "text": " We've achieved that when it comes to nukes in some ways, though at the beginning of the current post-World War II system, there was only two superpowers with nukes and now there's roughly nine countries with them. There are not a hundred countries with them. There aren't even 30, because we've done a really intense job of ensuring that Iran and many countries that want nukes don't get them. And why?"
    },
    {
      "end_time": 4349.855,
      "index": 173,
      "start_time": 4333.865,
      "text": " There are not uranium mines everywhere. You can see where they are. Uranium enrichment takes massive capability that you can literally see from space. There's radioactive activity associated. So it's somewhat easy to monitor that that's happening."
    },
    {
      "end_time": 4377.944,
      "index": 174,
      "start_time": 4350.725,
      "text": " This is not true at all with the newer technologies that provide more catastrophic capabilities. So obviously with AI right now and the regulation of it, there are conversations about like, we need to monitor all large GPU clusters or something like that, which to some degree can be done. But in terms of applications, it takes a very large GPU cluster to develop an LLM. It takes a very small one to run that LLM afterwards, right?"
    },
    {
      "end_time": 4406.749,
      "index": 175,
      "start_time": 4378.456,
      "text": " and then can you run it for destructive purposes? And it takes a very large capability to advance something like a CRISPR or a new type of synthetic bio knowledge. It doesn't take that much to be able to reverse engineer it after it's been developed. So this brings up this very important point of"
    },
    {
      "end_time": 4429.121,
      "index": 176,
      "start_time": 4408.814,
      "text": " When technology is built, there's this general refrain that all technologies dual use, right? Meaning that if it wasn't, sometimes it's developed for military purpose first and then becomes used for civilian normal market purposes. But if it's being developed for some non-military purposes, probably a militarized application."
    },
    {
      "end_time": 4454.923,
      "index": 177,
      "start_time": 4429.292,
      "text": " That's what's meant with dual use is military versus non-military. So it's not the same as this is a double-edged sword. It's positive and negative. It's not the same as that. Yeah, it is. What it's saying is you're developing this for some purpose, but it has other purposes too, right? And it has purposes that can be used for violence or conflict or destruction or something. And well, that is historically mostly used with the concept of has a military application can be used to advance war and killing and things like that."
    },
    {
      "end_time": 4482.21,
      "index": 178,
      "start_time": 4455.23,
      "text": " Sorry, when I was thinking of military, I was also thinking in terms of pure defense, not just defense that also can be something that can attack. Yeah. Yeah, the pure defense only military. It starts becoming part of most military doctrines that"
    },
    {
      "end_time": 4509.957,
      "index": 179,
      "start_time": 4483.49,
      "text": " Viable defense requires things that look like escalation, but that's another topic as well. So it's not just that all technologies are dual use, it's that they have many uses. You develop a technology and I think a good way to think about, so now this is a little bit of theory of tech. Did we close multipolar trap?"
    },
    {
      "end_time": 4529.326,
      "index": 180,
      "start_time": 4510.469,
      "text": " Well, you mentioned that it first came up in Scott Erickson's or Alexander. Yeah. And so basically the concept is you have many different agents who all of them pursuing their own rational interest and maybe they can't even avoid it because it would be so irrational. It would be so bad for them game theoretically."
    },
    {
      "end_time": 4559.053,
      "index": 181,
      "start_time": 4529.991,
      "text": " that the effect of each of the agents pursuing their own rational interests produces a global effect that is somewhere between catastrophic or at least far from the global optimum if they could coordinate better. So this is basically a particular type of multi-agent coordination failure. And we see this all over in the tragedy of the commons as an example, a market race to the bottom like happens in marketing and attention currently. It's an example. And then arms race is another example. Those would all be examples of a kind of multipolar trap coordination failure."
    },
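As a toy illustration of that definition, here is a minimal sketch; the resource model, agent count, and harvest rates are arbitrary assumptions, not anything from the conversation. Each agent harvesting at its individually attractive rate collapses the shared stock, while a coordinated lower rate yields more for everyone over the long run.

```python
# Toy sketch of a multipolar trap / tragedy of the commons; all parameters are
# arbitrary assumptions chosen only to show the structure of the failure.

def simulate(harvest_per_agent, agents=10, stock=1000.0, capacity=1000.0,
             regrowth=0.15, years=50):
    total_harvested = 0.0
    for _ in range(years):
        take = min(stock, harvest_per_agent * agents)       # everyone takes their share
        stock -= take
        total_harvested += take
        stock += regrowth * stock * (1 - stock / capacity)  # logistic regrowth
    return total_harvested, stock

greedy = simulate(harvest_per_agent=20)      # each agent maximizes near-term take
coordinated = simulate(harvest_per_agent=3)  # agents bind their own behavior
print("uncoordinated: harvested %.0f, stock left %.0f" % greedy)
print("coordinated:   harvested %.0f, stock left %.0f" % coordinated)
# The individually "rational" rate wipes out the stock within a few years and
# yields less in total than the coordinated rate: a suboptimal equilibrium for everybody.
```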
    {
      "end_time": 4587.295,
      "index": 182,
      "start_time": 4559.616,
      "text": " This is why if you have, say, one nation is advancing bioweapons or advancing AI weapons, either AI applied to cyber or applied to drones or applied to autonomous weapons of various kinds. If any country is doing that, it is such an obvious strategic advantage that every other country has to be developing the same types of things plus the whole suite of counters and defenses to those types of things. You could just say, well,"
    },
    {
      "end_time": 4609.701,
      "index": 183,
      "start_time": 4587.739,
      "text": " The world in which everybody has autonomous weapons is such a worse world than this world that we should just all agree not to do it. Except how do I know the other guy is actually keeping the agreement? Well, with the nukes we can tell because we can see if they're mining uranium and enriching it because it takes massive facilities and they're radioactive and stuff like that. But if we're talking about things like"
    },
    {
      "end_time": 4634.138,
      "index": 184,
      "start_time": 4610.026,
      "text": " working with AI systems or synthetic biosystems that don't require a bunch of exotic materials, exotic mining that don't produce radioactive tracers, et cetera. And they can be done in a deep underground military base. How do we know if they're doing it or not? So if we don't know if the other side is doing it or not, then the game theory is you have to assume they are because if you assume they are, you're going to develop it as well."
    },
    {
      "end_time": 4663.985,
      "index": 185,
      "start_time": 4634.684,
      "text": " And then if they do have it and use it, you aren't totally screwed. Whereas the risk on the other assumption that they aren't, if you were wrong, you're totally screwed. So under not having full knowledge, the game theory orients to worst case scenario and being prepared against the worst case. But what that means is all sides assume that of each other. We don't know that the other guys are keeping the agreement. Therefore, we have to race ahead with this thing. And so this is why you're saying when it comes to things like AI,"
    },
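A minimal sketch of that worst-case logic follows; the payoff numbers are invented purely to show the structure and are not estimates from the conversation.

```python
# Toy payoff sketch of "assume the worst" reasoning under unverifiable agreements.
# Rows: our choice; columns: what the other side secretly does. Numbers are invented.
payoffs = {
    ("restrain", "they restrain"): 3,    # best shared world
    ("restrain", "they develop"): -10,   # we're "totally screwed"
    ("develop",  "they restrain"): 2,    # we pay the cost of an arms buildup
    ("develop",  "they develop"):  -2,   # bad, but we're not defenseless
}

def worst_case(our_choice):
    # Without verification, evaluate each option by its worst possible outcome
    return min(payoffs[(our_choice, theirs)]
               for theirs in ("they restrain", "they develop"))

best = max(("restrain", "develop"), key=worst_case)
print(best)  # -> "develop": maximin reasoning pushes both sides to race ahead
```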
    {
      "end_time": 4688.046,
      "index": 186,
      "start_time": 4664.821,
      "text": " do we need something that is not just an FDA thing, but a UN thing? Is this the kind of thing that would require international agreement? And obviously, when there was the question of creating a pause on a six-month pause or whatever, one of the first things people brought up is, won't that let China race ahead? And isn't this a US-China competitiveness issue? And we can see with the CHIPS Act in trying to"
    },
    {
      "end_time": 4713.183,
      "index": 187,
      "start_time": 4688.473,
      "text": " ban ASML downstream type GPUs to China. And we can see with the pressures over Taiwan and TSMC that there is actually a lot of US-China great power play competition related to computation and AI in specific. And so it's a classic situation that if you can't put certain types of"
    },
    {
      "end_time": 4719.019,
      "index": 188,
      "start_time": 4713.49,
      "text": " Control mechanisms and internationally you will probably fail at being able to get them nationally as well"
    },
    {
      "end_time": 4750.06,
      "index": 189,
      "start_time": 4720.06,
      "text": " So about this competition where the tragedy of the commons such that like, well, the competitiveness plus tragedy of the commons accelerates the tragedy of the commons. Why is it not much more simple, religiously simple, ethically simple, where we go back and we say, hey, what I'm going to do is outputting something negative. I don't care that if you do it, you're going to get ahead. I don't care if you're going to eliminate me. I would rather die for your sins rather than contribute my own sins. So the selflessness. Why isn't that sort of ethic? Like we say, we don't want to be Luddites."
    },
    {
      "end_time": 4778.626,
      "index": 190,
      "start_time": 4750.06,
      "text": " Why isn't that a solution? I mean, you're bringing up a great point, which is, can there be a long range thinking about the kind of world we want to live in and a recognition of the kind of beings we have to be, the behaviors we would have to do and not do for that world to come about where we bind ourselves, right? Where we have some kind of, whether the ethics reduces to law, meaning there's a monopoly of violence that backs up the thing or not. Can we at least self-police in some way towards it?"
    },
    {
      "end_time": 4804.94,
      "index": 191,
      "start_time": 4779.224,
      "text": " The long-term answer must involve that, I would argue. Past examples have involved it, but let's talk about where it's limited. One could argue that the Sabbath and the punishments for violating the Sabbath is an example of binding a multipolar trap."
    },
    {
      "end_time": 4829.991,
      "index": 192,
      "start_time": 4805.896,
      "text": " You're not going to work on the Sabbath. And if you do, there's 29 different reasons lay that way. You can be killed for working on the Sabbath. It seems to secular people not thinking about the Chesterton fence deeply. It seems like a ridiculous, wacky religious idea, not grounded in anything with a ridiculous amount of consequence."
    },
    {
      "end_time": 4860.964,
      "index": 193,
      "start_time": 4831.493,
      "text": " Now, your theory of justice is, is it only a personal or is it a collective theory of justice? Right? Some theories of justice are your punishment is not based on just what was right for that one person, but creating an adequate deterrent for the entire population. Because if you don't, what happens? So a classic example is Singapore's drug policy is pretty harsh, right? Life in prison for just possession of drugs. Well, that was following the"
    },
    {
      "end_time": 4883.916,
      "index": 194,
      "start_time": 4861.971,
      "text": " devastating effect that the British had on the Chinese with the opium wars and recognizing how as a kind of population centric warfare, the British were able to influence catastrophic damage on China. They're like, we don't want that here. And we know that there are external forces that will push to do that kind of thing. And it's not just personal choice once there are asymmetric forces trying to"
    },
    {
      "end_time": 4912.551,
      "index": 195,
      "start_time": 4884.514,
      "text": " affect the most vulnerable people in the most vulnerable ways. So we're going to make it to where the deterrent on drug use is so bad, nobody will do it. So if you say that, you actually have to lock somebody up forever for smoking pot, which feels very unfair to them. But you probably only have to do that like a few times before nobody ever will fucking touch it because the deterrent is so bad and they believe it'll be enforced. We're hands off. And if the net effect on the society as a whole is"
    },
    {
      "end_time": 4940.196,
      "index": 196,
      "start_time": 4912.944,
      "text": " that you don't have black markets associated with drugs and gangs and the violence that's associated and you don't have ODs and you don't have the susceptibility to population center of warfare and whatever. They might argue a utilitarian ethical calculus that the harsh punishment was radically less harm to the total situation than not having that. So you have a strong deterrent. I'm not saying that I think that is a"
    },
    {
      "end_time": 4968.541,
      "index": 197,
      "start_time": 4940.759,
      "text": " adequate theory of justice, but it is a theory of justice, right? So let's say that the Sabbath was something like this, and I'm not saying that the rabbis that were creating it at the time thought this, though many people suggested that's probably what they thought. Some very competitive people wanting to get ahead will work every day. They'll work seven days a week, and as a result, they will"
    },
    {
      "end_time": 4996.613,
      "index": 198,
      "start_time": 4969.787,
      "text": " they'll be able to get a little bit more grain farming, whatever, then other people get more surplus, start turning that into compounding benefits. And if anyone does, it'll create a competitive pressure where everyone has to. So nobody spends any time with their family. Nobody spends any time connecting to what binds the culture together, the religious idea, et cetera. So we're going to make a Sabbath where no one is even allowed to work. And there's such a harsh punishment against it that we're binding the multipolar trap, right?"
    },
    {
      "end_time": 5025.247,
      "index": 199,
      "start_time": 4997.108,
      "text": " Because even though it would make sense in that person's rational interest to work that extra day a few times to get ahead, the net effect on the society cumulatively is actually a shittier world. So we're going to bind it because people having that time off to be with their family, each other and studying ethics is a good idea. I would argue that religion has heaps of examples like this of how do we bind our own behavior to be aligned with some ethic. But I would also argue"
    },
    {
      "end_time": 5053.08,
      "index": 200,
      "start_time": 5025.862,
      "text": " Because that was the question you're asking, right? Is there some kind of religious bind to the multipolar trap? And I think the Sabbath is a good example. I think we can also show how well that didn't work for Tibet when China invaded, right? Which is we want to be non-violently oriented. We have a religion that's oriented towards non-violence. And we can see that there were"
    },
    {
      "end_time": 5084.189,
      "index": 201,
      "start_time": 5054.462,
      "text": " If you think about at the time of Genghis Khan or Alexander the Great or whatever, where you have a set of worldviews that doesn't constrain itself in that way, and it's going to go initiate conflict with the people who didn't do anything to initiate it and don't want it. But the worldview that orients itself that way also develops military capability and maximum extraction for the surplus to do that thing. The other worldviews don't make it through, they get wiped out. So there are"
    },
    {
      "end_time": 5108.865,
      "index": 202,
      "start_time": 5084.428,
      "text": " indigenous cultures and matriarchal cultures and whatever that we just don't even have anymore, don't even have the ideas of remnants because it just got wiped out by worrying cultures. And so does that produce the long-term world we want? No, it doesn't either. And so there has been this kind of multipolar trap on that the natural selection, if you want to call it that, of"
    },
    {
      "end_time": 5159.462,
      "index": 204,
      "start_time": 5137.346,
      "text": " Yeah, I don't buy that."
    },
    {
      "end_time": 5179.104,
      "index": 205,
      "start_time": 5159.718,
      "text": " so i'm not saying this as someone who's religious or from a religious perspective well this is a religious perspective sorry but i'm not saying this as someone who's advocating for a certain religion the most dominant religion in the world is christianity and that's the story of someone who had the government against him and he said no i'm not going to fight back in fact if you want to persecute me go ahead"
    },
    {
      "end_time": 5206.834,
      "index": 206,
      "start_time": 5179.104,
      "text": " I will come to you. And one of the most striking stories, literally striking in the Bible to me, is the story of Jesus, the captor, and Peter, his friend, cut off the captor's ear. The guy was going to take Jesus to kill Jesus. And Jesus said, no, no, no, no, don't do that. And took the ear and healed his captor. So think about this, though. Yes, Jesus is the guy who said, let he who has no sins cast the first stone. And they brought Mary Magdalene and all those things."
    },
    {
      "end_time": 5233.575,
      "index": 207,
      "start_time": 5207.483,
      "text": " But we somehow did the Crusades in his name and the Inquisition in his name and the Dark Ages in his name, right? That's some weird-ass mental gymnastics. But the scenes, the versions that were going to stay peaceful and not do Crusades, how many do you see around and how much power did they get? So what happens is you have a bunch of different interpretations, the interpretations that orient themselves to power and to propagation, propagate."
    },
    {
      "end_time": 5258.234,
      "index": 208,
      "start_time": 5234.189,
      "text": " and make it through the interaction between the memes. So memes engage in a kind of competitive selection like genes do, but not individual memes, meme complexes. So if we have a religion that says, be humble, be quiet, listen to people and don't push your ideas on anybody. And then you have another one that says, go out and proselytize, convert everyone to your religion and kill the infidels."
    },
    {
      "end_time": 5280.913,
      "index": 209,
      "start_time": 5258.951,
      "text": " Which one gets more people involved, right? And so the ones that have propagation and that have, um, conflict ideas built right in. So of course then the meme sets evolve over time, right? The, the religious interpretations don't stay the same and the meme sets that end up winning"
    },
    {
      "end_time": 5305.691,
      "index": 210,
      "start_time": 5281.596,
      "text": " through how they reduce themselves to the behaviors that affect war and population growth and governance, et cetera, are all part of it. So the fact that the Dudu said, let he who has no sins among you cast the first stone got to be the religion that became dominant through the Crusades and through violent expansionism and then through radical, torturous oppression is fascinating, right? And it shows you"
    },
    {
      "end_time": 5315.23,
      "index": 211,
      "start_time": 5306.152,
      "text": " that you have like a real philosophy and then you have politics of power and you have fusions of those things that you have to understand both of when you're studying religion."
    },
    {
      "end_time": 5342.739,
      "index": 212,
      "start_time": 5316.015,
      "text": " To me, and I don't mean to harp on this point, but it doesn't have to be a choice between, hey, let me do good and let me not push my views on anyone and proselytizing slash killing. Because you can also proselytize and say your ideas and hopefully people, well, hopefully, maybe there is something in us, maybe there's something cosmically in us, I don't know, that says, hey, you know what? I like that. I don't like that killing. I don't like where that will lead. It resonates with me that the sins get passed down or that the violence gets passed down and amplified."
    },
    {
      "end_time": 5371.254,
      "index": 213,
      "start_time": 5342.739,
      "text": " But i need to be told that so i do need to hear that cuz i can't come up with that on my own so that's why i'm saying the proselytizing is a part of it whether proselytizing is explicit or it's lived and you just see how someone lives and then you inquire hey what are your views and why are you so happy when you have nothing and i'm so miserable and i have everything i just don't see it as a choice between you do good locally and don't tell anyone about it or you can tell people about your ethical system but also oppress them well to make the thought experiment we picked both extremes"
    },
    {
      "end_time": 5393.114,
      "index": 214,
      "start_time": 5371.732,
      "text": " We can see that the Mormons proselytize, but they don't kill everyone who disagrees in the crusading kind of way. They have not expanded as much as the crusades. They've not got as much global dominance or total population as a result, but they have not got nowhere. We can see that"
    },
    {
      "end_time": 5423.217,
      "index": 215,
      "start_time": 5394.155,
      "text": " The ones that say, hey, if someone is interested, we'll share, but we're not going to proselytize because we have a certain humility of how much we don't know and a respect for everyone's choice. The mystery schools stay pretty small. Again, when we were talking about asymmetries, those who are more focused on the opportunity and downplay the risk move ahead, get the investment capital, et cetera. Those who are focused on the risk heavily don't. There's a"
    },
    {
      "end_time": 5448.933,
      "index": 216,
      "start_time": 5423.882,
      "text": " There's a similar thing here, which is like there's an asymmetry in the ideas that hit evolutionary drivers, even if perverse forms, right? Like in the evolutionary environment, it was where there was actual food scarcity, we evolved dopamine-ergic, dopamine-opioid responses to salt, fat and sugar, which were hard to get and useful."
    },
    {
      "end_time": 5470.555,
      "index": 217,
      "start_time": 5449.462,
      "text": " As soon as we got to the point where we could produce lots and lots of salt, fat, and sugar, and there was no scarcity on those things, our genetics didn't change. The fact that it felt really good when you ate that and incentivized you to get more of it, where that little bit of surplus might mean you make it through the famine versus not, it was an adaptive response. Then we create an Anthropocene where we have"
    },
    {
      "end_time": 5497.056,
      "index": 218,
      "start_time": 5471.084,
      "text": " Hostess and McDonald's giving amounts of salt fat sugar that are in the combinations of them with the kind of optimized palatability where it is not only is it not evolutionarily useful to get it anymore, it is actually the primary cause of disease in the environments where that's available. It doesn't mean that the dopaminergic signal changed, right? So we're able to kind of take an evolutionary signal and hijack it"
    },
    {
      "end_time": 5525.094,
      "index": 219,
      "start_time": 5497.688,
      "text": " And this is obviously what fast food does to the evolutionary programs around food. It's what social media does to the impulses for social connectivity. It's what porn does for the impulses to sexual connection associated with intimacy and procreation and all like that, is to extract the hypernormal stimuli from the rest of what makes it actually evolutionarily fit. Same thing can happen with religion, right? You can offer people an artificial sense of certainty"
    },
    {
      "end_time": 5544.65,
      "index": 220,
      "start_time": 5526.049,
      "text": " and offer them an artificial sense of belonging and security and various things like that and without much actual deep philosophic consideration or necessarily even deep numinous experience."
    },
    {
      "end_time": 5569.804,
      "index": 221,
      "start_time": 5544.974,
      "text": " that similarly has the ability to scale more quickly than something where you want people to actually understand deeply, discover things themselves, have integrated experiences, not just do the right action, but for the right intrinsically emerging reasons, which is why, you know, your podcast doesn't have one of their as many views as, um,"
    },
    {
      "end_time": 5600.623,
      "index": 222,
      "start_time": 5572.875,
      "text": " the most trending TikTok videos that require less work and are shorter and more oriented to hypernormal stimuli. So I'm not saying we can't work with these things. I'm saying these are the things we have to work with. So we're in a situation where the, let's say that where in-groups and out-groups would both cooperate and compete at different times based on what game theory seemed to make most sense. And"
    },
    {
      "end_time": 5626.613,
      "index": 223,
      "start_time": 5601.357,
      "text": " They would typically cooperate while reserving the right to compete and to even fully defect if they need to, right? Resource scarcity or something. Or just a sociopath coming into leadership, which totally happens. So the combination of the worldviews, everybody needs to believe our religion. If they don't, they are bad. And so we're going to convert them or whatever, right? Or everyone needs to be"
    },
    {
      "end_time": 5652.585,
      "index": 224,
      "start_time": 5627.073,
      "text": " have a democracy because that's good and all other forms of governance are bad or whatever it is. There's ideology that orients itself. There's a tech stack that is a part of the capacity to do that. There are coordination mechanisms that are a part to do that. So the full stack of the superstructure, the worldviews, the social structure and the infrastructure are what are engaged in in-group, out-group competitions and that are up-regulating, largely shaped by those competitions."
    },
    {
      "end_time": 5678.746,
      "index": 225,
      "start_time": 5653.524,
      "text": " It just happens to be that the version that makes it through that shaping process is also orienting us towards a whole suite of global catastrophic risks. It is basically self terminating. And so it has been the case that you have to win the local arms race because otherwise you lose. But the arms races that are externalizing harm, but on an exponential curve that have cumulative effects,"
    },
    {
      "end_time": 5701.852,
      "index": 226,
      "start_time": 5679.411,
      "text": " you don't actually get to keep externalizing on an exponential curve or running arms races on an exponential curve in a finite space forever. So we're at this interesting space where you can't try to build an alternate world that just loses, but you also can't keep trying to win in the same definition of win."
    },
    {
      "end_time": 5732.312,
      "index": 227,
      "start_time": 5702.585,
      "text": " This is the interesting point we're at, which is we have to actually build a version of win that is not for an in-group in relationship to an out-group, but is something that actually allows some kind of omni-win that gets us out of those multipolar traps. And this was all coming from the topic of you starting with why you brought up the UN and that you have to deal with these things with some kind of sense of how are other people dealing with them and how does that affect the choice-making process?"
    },
    {
      "end_time": 5759.36,
      "index": 228,
      "start_time": 5733.456,
      "text": " Some people would say, look, we're group selected and then we can make our group to be the tribe versus another tribe. And one of the solutions is if there was aliens and then we can bind together as humans and fight something external. It doesn't have to be aliens. The point is that there needs to be something extra. So he's saying there's another option and that that option, the bind together in order to fight some other out group, whether the group is something physical or it could be more abstract, that that's not something that should be pursued. And there's another option."
    },
    {
      "end_time": 5790.828,
      "index": 229,
      "start_time": 5762.312,
      "text": " I didn't say that, but it's an interesting conversation. If we are not binding in-groups to fight out-groups, so this is kind of like Machiavelli's enemy hypothesis, that people are kind of evolutionarily tribal and that to unify a lot of people at a much larger than tribal scale, given that they naturally will find their own differences and conflicts and"
    },
    {
      "end_time": 5819.684,
      "index": 230,
      "start_time": 5792.398,
      "text": " reasons to otherize somebody because they have more influence over their own small group or whatever, to unify them works best if you have a shared enemy that forces them to unify. And so then you're, you eventually, of course, this makes small tribes unified to deal with a larger tribe. And then you get kingdoms and nation states and global economic trading blocks. And eventually you get great superpower conflicts."
    },
    {
      "end_time": 5844.787,
      "index": 231,
      "start_time": 5820.589,
      "text": " and that if the only way to unify, that if groups opposing each other in that way ends up being catastrophic for the world, so we want to get everybody unified in some way, do we need a shared enemy? Obviously, this has been talked about a gazillion times. Can climate change or environmental harm be the shared enemy? Not really."
    },
    {
      "end_time": 5872.91,
      "index": 232,
      "start_time": 5845.213,
      "text": " Even if everyone believed in it, which they don't, it doesn't hit people's agency bias in the same way and whatever. Could we stage a false flag alien invasion to make us unify? Of course, this has actually been an explored topic, both in sci-fi and reality. How deeply explored is a question, but"
    },
    {
      "end_time": 5899.445,
      "index": 233,
      "start_time": 5873.643,
      "text": " Yes, it's a very natural topic to explore that something like a attack from the outside would allow that kind of unification. Because of that, there are people who are very skeptical and concerned of anything that looks like a presented shared threat that should create some unified response because then they're like, well, what is the government"
    },
    {
      "end_time": 5925.213,
      "index": 234,
      "start_time": 5899.838,
      "text": " that will navigate that shared threat and who has any checks and balances on that if that thing becomes captured or corrupt. And so this is again the catastrophes or dystopias. If you don't have some coordination, you get these problems of coordination failure. If your coordination is imposed, you end up getting oppression dynamics. So how do you get coordination that is global but that is emergent, that has"
    },
    {
      "end_time": 5944.974,
      "index": 235,
      "start_time": 5926.169,
      "text": " that keeps local power from doing things that drive multipolar traps, but that also ensures that you don't get centralized power that can be captured or corrupted. A system of coordination has to address both of those things. And as we move into more"
    },
    {
      "end_time": 5974.002,
      "index": 236,
      "start_time": 5945.367,
      "text": " people with more resource consumption per capita and the cumulative tipping points on the biosphere being hit, but even more than that, exponentially more power available to exponentially more actors. Obviously, if we look at the history of how humans have used power and you put an exponential curve on that, it doesn't go well. That's one way of thinking about the coordination issue currently."
    },
    {
      "end_time": 6005.316,
      "index": 237,
      "start_time": 5975.401,
      "text": " When we were thinking about the UN or whatever is this global agency potentially, the phrase, they have no checks and balances comes up. Is there a way of organizing something that is global and influential that has its own internal checks and balances? I don't understand how the US political system works. It's my understanding that it's tripartite and antagonistic. I don't understand the details of it. I'm apolitical, at least consciously. I haven't looked into it, but the point is that's interesting. I don't know how that works. I wonder how much that doesn't work, how much that can be accelerated, amplified."
    },
    {
      "end_time": 6036.186,
      "index": 238,
      "start_time": 6006.357,
      "text": " Well, one point that we bring up is that any proposed system of coordination, governance, whatever, is not going to work the same way after it's been running for a long time as when it was initially developed because all of the systems have a certain kind of institutional decay or entropy built in that has to be considered."
    },
    {
      "end_time": 6063.797,
      "index": 239,
      "start_time": 6036.63,
      "text": " because every vested interest that is being bound has a vested interest in figuring out how to break the control system or capture or corrupt it or something, right? And so it's not just how do we build a system that does that, but it's also how do we build a system that continues to upregulate itself to deal with an increasingly complex, different world than the one it was originally designed for, and that continues to deal with the fact that wherever there is an incentive to game, the system is going to happen."
    },
    {
      "end_time": 6092.875,
      "index": 240,
      "start_time": 6064.753,
      "text": " So you have to not only figure out a system that makes sense currently, but a system that has an adaptive intelligence that is adequate for the changing landscape. So when you look at the U.S., because leaving corrupt monarchy was key to the founding here, and so we're going to try to do this democracy, non-monarchy thing. It was also the result of a change in tech, right? It was a result of the printing press."
    },
    {
      "end_time": 6112.79,
      "index": 241,
      "start_time": 6095.401,
      "text": " where rather than before a printing press and everyone could not have textbooks and couldn't have newspapers and to have access to information, someone had to copy a book by hand, which meant that there were very few of them or copy the information by hand, so only the wealthy could have it."
    },
    {
      "end_time": 6138.439,
      "index": 242,
      "start_time": 6113.2,
      "text": " the idea of a wealthy nobility class that got educated enough to make good choices for everyone else, where if they were too corrupt, the people would overthrow them. So there was a certain kind of checks and balance that kind of maybe made sense, right? With a noblesse oblige built into the obligation of the nobility class to rule wealth. I'm not saying it did, but that's at least the story. But as soon as the printing press comes and now everybody could have textbooks and get educated,"
    },
    {
      "end_time": 6166.118,
      "index": 243,
      "start_time": 6139.087,
      "text": " And everybody could have a newspaper and know what's going on. It kind of debases the idea that you need an ability class to make all the choices because everyone else doesn't know what's really going on. And you say, well, maybe we could all get educated enough to understand how to process information and we could all get news to be able to understand what's going on and all have a say. And so obviously democracy emerged following that change in information tech. I'm saying this because of course,"
    },
    {
      "end_time": 6196.425,
      "index": 244,
      "start_time": 6166.613,
      "text": " The difference in the AI case just briefly is that I don't see the AI is democratizing more so than exacerbating the inequality in terms of like, so if you're extremely bright, the amount of information you can process is going to be far outpacing someone who either is not so bright or gets access to that AI three weeks later."
    },
    {
      "end_time": 6226.34,
      "index": 245,
      "start_time": 6197.637,
      "text": " thinking through in the same way that the printing press had an effect on central religion through everybody can have a Bible and read it and learn on their own and kind of Lutheran revolution and it had an effect on central government in the form of feudalism. We can then look at kind of McLuhan's insights of how information tech changes the nature of the collective intelligence and motivation"
    },
    {
      "end_time": 6256.152,
      "index": 246,
      "start_time": 6228.575,
      "text": " And as a result, the emerging type of society, we can look at the way that the internet and digital have already done that. Looking at the way social media has affected media, for instance, which affects our democratic systems is a pretty obvious one. But then we can look at AI and not just AI, but different types of AI, different ways it could develop. LLM is very different than other kinds of AIs. So we'll come to that in a moment, but let's come back to the other question because you were asking the checks and balances one."
    },
    {
      "end_time": 6277.363,
      "index": 247,
      "start_time": 6257.927,
      "text": " So the idea in the US system was the British system following the Magna Carta and the Treaty of Forest and whatever was supposed to be the most ideal noble thing around and ended up being"
    },
    {
      "end_time": 6308.234,
      "index": 248,
      "start_time": 6278.677,
      "text": " The idea that no matter how you develop a system, it can be corrupted, that was built in. How do we make sure that no part gets too much power and that we have checks and balances throughout was key. Before you even get into the three branches of government, you already have the separation of the state and the church, which was already a key part, and you have the separation of the market and the state, which is"
    },
    {
      "end_time": 6332.295,
      "index": 249,
      "start_time": 6308.66,
      "text": " You have a liberal democracy that is proposed, so you don't have a pure market function, but you also don't have that the state is running the entire economy. And so the separation of the market, the state, the church, there's a few other ways of thinking about separation was already"
    },
    {
      "end_time": 6362.022,
      "index": 250,
      "start_time": 6332.483,
      "text": " a part of it. And then with regard to the state's function, the separation of the legislative, the judicial, and the executive were critical. And then within each of those, within the legislative, a bicameral breakdown was really important. And then the 10th Amendment was to push as much power, the subsidiary principle, to the states as possible and as little to the federal. So there were many, many steps of checks and balances on concentrated power that were built into the system."
    },
    {
      "end_time": 6390.282,
      "index": 251,
      "start_time": 6362.568,
      "text": " But of course, everyone who is smart, who is also agentic, who wants more power, looks for loopholes and or figures out how to write laws and to get them passed, right? Doing legislation and lobbying. And of course, corporations can pay for a lot more lawyers than an average citizen can or than a nonprofit group that doesn't have a revenue stream associated. So the group that is trying to turn"
    },
    {
      "end_time": 6417.824,
      "index": 252,
      "start_time": 6390.862,
      "text": " a commons into commodities versus one that's trying to protect the commons will inherently have a bigger revenue stream to employ media to change everyone's mind or to employ campaign budgets or to employ lobbyists or whatever. So you end up seeing that there is a progressive kind of loophole finding corruption because the underlying incentive systems, invested interests are still there, right?"
    },
    {
      "end_time": 6444.906,
      "index": 253,
      "start_time": 6418.865,
      "text": " Baudrillard simulation and simulacra that discusses the steps of the degradation from a new system to how it eventually devolves into mostly a simulation of what it originally was is a good analysis on this we could discuss. So that's a little bit on kind of the history of checks and balances on power, but I don't think anybody looks at our current US system and says it's doing a great job of that."
    },
    {
      "end_time": 6475.282,
      "index": 254,
      "start_time": 6445.794,
      "text": " And there's a bunch of reasons, in addition to the one that I said about how there is a natural process of figuring out how to influence this. There's one other part that's actually worth saying. So you have a state, you have a market, and you have the people as members of a democratic"
    },
    {
      "end_time": 6505.759,
      "index": 255,
      "start_time": 6476.732,
      "text": " government, meaning their function in state, not their function in market. So government of, for, and by the people, the people might not all be representatives, but they can all speak to their representative, decide how it votes, those types of things, right? So there's supposed to be a check and balance between these three, that the main reason that there is law is to prevent some people or groups of people from doing things that they have an incentive to do that would suck for everybody else."
    },
    {
      "end_time": 6537.346,
      "index": 256,
      "start_time": 6508.029,
      "text": " Obviously, whether it's individual stealing or murder or whatever, or it's a corporation cutting down the national forests or polluting the waterways too much, somebody has an incentive to do something. In a democracy where the idea is supposed to be that we all want and value different things, but the collective will of the people as determined through some voting process gets instantiated into law."
    },
    {
      "end_time": 6568.097,
      "index": 257,
      "start_time": 6538.541,
      "text": " of violence can back that up. That's kind of core to the idea of a liberal democracy, right? I'm not arguing that it is a good system, but I'm arguing for the core logic of it. And it's because the recognition that if we just had a pure market system, the reason why there wasn't just a pure kind of laissez-faire system, even though the people building this understood, at least their expressed reason, is an impure type market dynamic, as you were mentioning with AI, some people are way better at than other people."
    },
    {
      "end_time": 6598.439,
      "index": 258,
      "start_time": 6568.49,
      "text": " And as a result, we'll just end up getting a lot more money that they can convert to more land, resources, employees, et cetera. And you end up getting a power law distribution on wealth, which is a power law distribution on everything. And these people's interests end up determining the whole society and these people's interests are pretty determined for them. And so if you want to create protections for these people at all, and that was basically the King George situation and the inspiration for the Declaration of Independence and leaving, which was there was too much concentrated power and it was kind of fucked. So how do we make that not happen?"
    },
    {
      "end_time": 6616.084,
      "index": 259,
      "start_time": 6599.053,
      "text": " Well, since we know that the market is going to kind of naturally do that, let's create a state that is more powerful than any market actor. And let's make sure that the state reflects the values of all the people. So the little guys get to unify themselves through a vote, right?"
    },
    {
      "end_time": 6638.063,
      "index": 260,
      "start_time": 6616.561,
      "text": " And then you get to have a representative that represents everybody. It's the only one given a monopoly of violence and it gets to make sure that any more powerful actors are checked. That's kind of the idea. Yeah. So, so far this is an account of how it's been like a history lesson, but you aren't saying this is how it should continue to be, nor this is how it's operating in its ideal sense currently. Are you just saying that this was the reasoning behind it?"
    },
    {
      "end_time": 6667.039,
      "index": 261,
      "start_time": 6638.439,
      "text": " one key part of how it broke down. The idea is that the market, people will have incentives to do things that are good for them, that might suck for the environment or others. Others have the ability to agree upon laws that will bind those actors to not do that thing. The state is supposed to check the market, let the market do its thing, do resource distribution, productivity, let it do that because it's good, but check the particularly fucked applications."
    },
    {
      "end_time": 6686.886,
      "index": 262,
      "start_time": 6667.995,
      "text": " In order for the state to check the market, the people are supposed to check the state and ensure that the state is actually doing the thing that it's supposed to do and that the representatives aren't corrupt and taking back in deals and all those kinds of things. And then there's a way in which the market kind of checks the people, meaning that the people can't"
    },
    {
      "end_time": 6714.514,
      "index": 263,
      "start_time": 6687.585,
      "text": " The accounting checks them. They can't vote themselves more rights than they're willing to take responsibility for. They can't make the economics of the whole situation not work. They can't vote themselves a bunch. If the people all say, yes, we should all get no taxes but lots of social services, then the accounting is what actually checks the people."
    },
    {
      "end_time": 6743.916,
      "index": 264,
      "start_time": 6715.077,
      "text": " So that's the idea of how you have this kind of self-stabilizing thing. But of course, the people stopped checking the market once we were out of kind of the sense of an eminent need for revolution, then the people have a lot of shit to do other than really pay attention to government in detail. And there's a bunch of other reasons beyond the scope of this conversation why the people stopped checking the government, in which case the market is continuously trying to influence the government through lobbying and legislation and campaign finance and all those other things."
    },
    {
      "end_time": 6769.241,
      "index": 265,
      "start_time": 6744.633,
      "text": " and so then you end up getting regulatory capture rather than regulatory effectiveness. So when you put those checks and balances, it's going to change. When everyone's scared of concentrated power following a revolution, it's different than four generations later where nobody actually feels that fear anymore and is busy doing other shit. So it's not just how do you build your system, it's how do you build a system"
    },
    {
      "end_time": 6795.981,
      "index": 266,
      "start_time": 6769.838,
      "text": " where the initial people that went through the difficult thing to build it when they die, you didn't just pass on the system, but the generator function of the kinds of insights needed to keep updating and evolving the system under an evolving context. So when you ask the question about could such a thing be built at an international level where there are checks and balances, the answer is it's super hard. But yes,"
    },
    {
      "end_time": 6818.029,
      "index": 267,
      "start_time": 6796.903,
      "text": " But it's not just can you design it properly upfront, it's also can you factor how that system then even if well intended at first, it's kind of like all technologies dual use. So you build the gene editing for, you know, oncology, but then it can be used for bioweapons. You have to not just think about what you're building it for, but all the things that will happen having created"
    },
    {
      "end_time": 6832.346,
      "index": 268,
      "start_time": 6820.555,
      "text": " you're building it for right now, but as the landscape changes, culture changes, can this thing be corrupted? Can it be captured in future different contexts? And how do you build in immune systems to that?"
    },
    {
      "end_time": 6859.07,
      "index": 269,
      "start_time": 6833.148,
      "text": " And that sort of thinking seems to be missing with the development of AI. And it reminds me, I've said this several times, like the development of the bomb where Feynman and Oppenheimer, mainly Feynman and his peers, said they didn't think about what they were creating. They were thinking, we're having fun speaking about these topics. It's even more fun to do research on these topics. Einstein said, like, I would burn my hands had I known that this was what was going to be developed. I wasn't thinking about that. I wasn't thinking about the consequences. And Feynman said something similar."
    },
    {
      "end_time": 6885.691,
      "index": 270,
      "start_time": 6859.07,
      "text": " we're consumed with the achieving of a goal and we're not thinking about what would occur as a consequence once we attain it and you hear this constantly in the ai scene channels like two minute papers that say what a time to be alive that's like his catchphrase what a time to be alive like encouraging and amazed constantly thinking oh what is going to be like two papers down the line said enthusiastically i see little caution expressed yes geez louise like what the heck are we building and should we just because we could should we"
    },
    {
      "end_time": 6913.951,
      "index": 271,
      "start_time": 6886.63,
      "text": " When the people who express caution, this now relates to this asymmetry we said, if people are like, hey, this is extremely risky technology, we need to understand the risk space very deeply first. We need to ensure that the development of the technology and then its future use by everybody is safe enough to be worth built."
    },
    {
      "end_time": 6938.439,
      "index": 272,
      "start_time": 6915.265,
      "text": " Those people end up running nonprofits because there's no upside to that. There's no immediate capital upside to that. So they have a hard time getting the capital to get really good researchers or big enough computers and data sets to try to run stuff on for trials. And the people that are like, oh, there's a market application to this have a much easier time getting a massive GPU cluster and a lot of talent and a lot of data."
    },
    {
      "end_time": 6958.848,
      "index": 273,
      "start_time": 6938.746,
      "text": " and so we can see this if you name the names that are out there in their views and then map them to the types of organizations they run and the type of motivational or cognitive bias it somewhat maps. So what is"
    },
    {
      "end_time": 6985.776,
      "index": 274,
      "start_time": 6960.725,
      "text": " This is what we intended to talk about. All of this has been interesting preface. What is the actual risk space with AI? What do we know? What do we not know? How should we think about it? How should we proceed, especially given that AI is a lot of different things? Should we dive in there now? Sure. I just wanted to point out that although this seems like a disagreement between you and I on the surface, there's an agreement. So again, I'm not a Mormon."
    },
    {
      "end_time": 7005.145,
      "index": 275,
      "start_time": 6986.988,
      "text": " But I don't see the Mormons failure because they go and they say, hey, you should whatever they say, act right, be humble, be kind, don't overconsume and so on. But then their religion doesn't grow. I don't see that as a failure of them, because maybe their religion is larger than just being Mormon. It's something about the values that are spread and then they"
    },
    {
      "end_time": 7030.179,
      "index": 276,
      "start_time": 7005.145,
      "text": " send out these values and they filter through the community in the same way that these nonprofits just because they're not the largest doesn't mean that the values that they send out don't influence you and I and influence the people who are listening and then act differently because of these values. We have no idea how much the passivism of Tolstoy has influenced you hugging your father and your brother and the positive sentiment we have generally speaking in society toward decentralization."
    },
    {
      "end_time": 7050.589,
      "index": 277,
      "start_time": 7030.179,
      "text": " And it also reminds me of the Cassandras for people who don't know what a Cassandra is. It's someone who makes a prediction of it's a doomsayer. It's akin to a doomsayer. They're the opposite of a self-fulfilling belief. So self-fulfilling belief is one where you state it and you create the condition such that it becomes true. Whereas in the best case for the people who say the world is going to end well,"
    },
    {
      "end_time": 7078.097,
      "index": 278,
      "start_time": 7050.589,
      "text": " Their success depends on them being self-sacrificing, depends on us then being able to repudiate them, let the world hasn't ended. We have no idea how much the doomsayers or the fear-mongers said something that made us straighten up and act right, just enough that it pushed us off of the brink and influenced us to make society live and so we then have this archetype of the cautious and contumacious false cravens of the past. It's just not clear that because they have died doesn't mean that they're unsuccessful. That's what I mean."
    },
    {
      "end_time": 7101.442,
      "index": 279,
      "start_time": 7078.66,
      "text": " Yeah, so there's a simple viewpoint. The idea that even though Greece didn't continue its empire relative to Rome, that its memes ended up influencing the whole Roman Empire. And so in some way,"
    },
    {
      "end_time": 7128.063,
      "index": 280,
      "start_time": 7102.022,
      "text": " it one, or similar with Judaic ideas or whatever. One of the greatest examples of that that people talk about right now is Tibetan Buddhism. From the point of view of Tibet as a nation, the Tibetan people, the integrity of that wisdom tradition, it was radically destroyed."
    },
    {
      "end_time": 7154.804,
      "index": 281,
      "start_time": 7128.524,
      "text": " But in the process of the world seeing that and having some empathy engendered for it, even though it didn't protect Tibet, did that actually disseminate Buddhist ideals to the whole world radically faster? There's a similar conversation as so many people become interested in ayahuasca and plant medicines from indigenous cultures that the economic pressure of that is making"
    },
    {
      "end_time": 7184.258,
      "index": 282,
      "start_time": 7155.06,
      "text": " What is remnant of those cultures get turned into tourism and ayahuasca production or shipibo production or whatever it is that on one hand it actually looks like a destructive act on those cultures and the other way it's are the mimetics of those now becoming dispersed throughout the dominant systems. That is a part of the consideration set that has to be considered. And now do we see, for instance, that"
    },
    {
      "end_time": 7202.381,
      "index": 283,
      "start_time": 7184.923,
      "text": " particularly nonviolent groups, like say the Janes as an example of maximum nonviolence, that those memes do become decentralized and affect everybody."
    },
    {
      "end_time": 7230.708,
      "index": 284,
      "start_time": 7203.114,
      "text": " It's not a simple yes or no, right? There's a whole bunch of contextual applications. So when we look at, are there memes from Buddhism that have influenced the world? Yes. But are they the ones that are compatible with the motivation set of the world they influence? So you've got this like Buddhist techniques of mindfulness for capitalists in Silicon Valley to crush at a capitalism is a very weird version of the subset of the Buddhist stack, right?"
    },
    {
      "end_time": 7260.555,
      "index": 285,
      "start_time": 7231.425,
      "text": " I remember when I first started seeing the popularization of mindfulness techniques in business so that people could focus better and crush into capitalism, how fucking hilarious that thing is, right? Because in some ways, you are distributing good ideas. In other ways, you're actually extracting from a whole cultural set the part that ends up being a service ingredient to another set. The topic of"
    },
    {
      "end_time": 7285.776,
      "index": 286,
      "start_time": 7261.254,
      "text": " What makes it through? I mean, it's a complex topic. What I will say is the idea that a civilization, which is its superstructure, its worldview values, what is true, good, beautiful, what it's oriented to, what's the good life, its social structure, meaning its political economy and institutions, its"
    },
    {
      "end_time": 7313.848,
      "index": 287,
      "start_time": 7286.118,
      "text": " formal incentives and deterrents and how it organizes collective agreement and its infrastructure, the physical tech stack that it mediates all this on together in a civilizational stack. One of those competing with other ones for scarce resource and dominance and all of them engaged in that particular competitive thing, I would argue no one actually gets to win that."
    },
    {
      "end_time": 7337.21,
      "index": 288,
      "start_time": 7314.121,
      "text": " That process of the competition of those relative to each other does actually create an overall global civilizational topology that self-terminates. But also no one trying to create a good long-term future can just lose at that game in the short term. So you can neither create the world you want by just losing at that game, nor by trying"
    },
    {
      "end_time": 7352.125,
      "index": 289,
      "start_time": 7339.241,
      "text": " actually"
    },
    {
      "end_time": 7381.732,
      "index": 290,
      "start_time": 7354.241,
      "text": " Boy, he wants to go out with this other girl who happens to have the same name as his mom. But anyway, she's like, I don't want to go out with you because you're just in love with your mom. Then he's like, No, and he just left his mom's home. He's like 40 years old or 35. He's like, No, no, no, it's the opposite. I'm leaving my mom for you. You're replacing my mom. And then she's like, No. And it's because he's still framing it in the same way. Anyway, we have to abandon the frame. So please, what does that look like and integrate AI into this answer?"
    },
    {
      "end_time": 7412.039,
      "index": 291,
      "start_time": 7385.333,
      "text": " So we have not just tried to give the frame on AI risk yet and to incorporate that in the what is the long-term solution for civilization as a whole look like, let's actually just kind of do the AI risk part first and then we can bring back together. Let's try to frame how to think about"
    },
    {
      "end_time": 7441.425,
      "index": 292,
      "start_time": 7413.78,
      "text": " AI risks, AI opportunities and potentials, including how AI can help solve other risks, which has to be factored. I will add as preface that I am not an AI developer. I don't have background in that. I'm not even an AI risk expert or specialist. I know you're in conversation."
    },
    {
      "end_time": 7467.756,
      "index": 293,
      "start_time": 7441.852,
      "text": " might have Yudkowsky and other people who really are Stuart Russell, Bostrom, all those guys would be great. Because of some maybe novel perspectives about thinking about risk and governance approaches to risk writ large, meta-crisis, that's the perspective that I'm taking into the AI topic."
    },
    {
      "end_time": 7497.688,
      "index": 294,
      "start_time": 7468.66,
      "text": " What is unique about AI risk relative to other risks? We were talking earlier about environmental risks and risks associated with large scale war and breakdown of human systems and synthetic bio and other things. If we look at other technologies that have the potential to do some catastrophic things like nuclear, it's very easy to see that nuclear weapons don't make better bio weapons."
    },
    {
      "end_time": 7525.759,
      "index": 295,
      "start_time": 7498.319,
      "text": " They don't make better cyber weapons. They don't even make better nuclear weapons directly. The same is true for biotechnology. It doesn't automatically make those other things. AI is pretty unique in that you can use AI to evolve the state of the art in nuclear technology, in delivery technology, in intelligence technology, in bio and cyber and literally all of them."
    },
    {
      "end_time": 7555.947,
      "index": 296,
      "start_time": 7526.067,
      "text": " It is unique in its omnipurpose potential in that way, because of course, all those other technologies were developed by human intelligence, human intelligence, agency, creativity, some unique faculties of human cognitive process. And so where all the other technologies are kind of the result of that human process, building a technology that is doing that human process, possibly much faster and on much more scale is obviously a unique kind of case, right?"
    },
    {
      "end_time": 7575.538,
      "index": 297,
      "start_time": 7557.005,
      "text": " And so there's thinking about what type of risk does an AI system create on its own? But then there's thinking about how do AI systems affect all other categories of risk? We have to think about both of those."
    },
    {
      "end_time": 7604.684,
      "index": 298,
      "start_time": 7576.186,
      "text": " And then, in addition to the fact that the nukes don't automatically make better bioweapons, the nukes don't even automatically make more nukes, right? They're not pattern replicating. But to the degree that we actually get AI systems that not only can make all the other things better, but they can make better AI systems, and to the degree that there starts to be something like autonomy in that process, then the self-upgrading and omnipotential of all the other things"
    },
    {
      "end_time": 7630.35,
      "index": 299,
      "start_time": 7605.23,
      "text": " It's also true that there's an exponential curve in the development of hardware that AI runs on, better GPUs and all of different kinds of computational capabilities. There's an exponential curve in IoT systems for capturing more data to train them on, exponentially more people and money going into the field."
    },
    {
      "end_time": 7658.933,
      "index": 300,
      "start_time": 7630.93,
      "text": " because of the way that shared knowledge systems work, the kind of exponential development in the software and cognitive architectures. So we're looking at the intersection of multiple exponential curves, not just a single one. That is also kind of important and unique to understand about the space. So thinking about the case of AI turning into AGI, an autonomous artificial intelligence system,"
    },
    {
      "end_time": 7677.927,
      "index": 301,
      "start_time": 7659.377,
      "text": " We can no longer pull the plug on that has goals, has objective functions, whatever they happen to be. That is something that guys like Bostrom and Udkowski have done a very good job of describing why that's a very risky thing. I think everybody at this point probably has a decent sense of it, but just make it very quick."
    },
    {
      "end_time": 7699.428,
      "index": 302,
      "start_time": 7679.104,
      "text": " When we say a narrow AI system, we mean something that is trained to be good at a very specific domain, like beating people at chess or beating them at Go or being able to summarize a large body of text. When we say general intelligence, we mean something that could maybe do all of those things."
    },
    {
      "end_time": 7725.179,
      "index": 303,
      "start_time": 7699.991,
      "text": " and can figure out how to be better than humans at new domains it has not been trained on through some kind of abstraction or lateral application of what it already knows. So if you put us into an environment where we have to figure out what is even the adaptive thing to do, we will do it. There's a certain kind of general intelligence that we have. So when we talk about a generally intelligent, artificial intelligence,"
    },
    {
      "end_time": 7750.776,
      "index": 304,
      "start_time": 7725.759,
      "text": " And then, of course, because we can develop AI systems, one of the things it could do is develop AI systems. So if it has more cognitive capability than us in some ways, it can develop a better AI system and that one could recursively develop a better one. And you get this kind of thought about recursive takeoff in the power of an AI system. And there are conversations about whether that would be slow or fast."
    },
    {
      "end_time": 7778.899,
      "index": 305,
      "start_time": 7751.408,
      "text": " Is there an upper boundary on how intelligent a system could be and humans are near the top of that? Or are we barely scratching at the beginning of it and we could have something millions of times smarter than us? So that's all kind of part of that conversation. But the idea that we could create a artificial intelligence that could basically beat us at all games, which it could think about economy and affecting public opinion,"
    },
    {
      "end_time": 7806.101,
      "index": 306,
      "start_time": 7779.48,
      "text": " and military as games and it has faster feedback loops, faster go to loops to get better than we do. So if we're like trying to deal with it, it's going to win at newly defined games. And if that thing, we can't pull the plug on and it can anticipate our movements and beat us at all games. If it has goals that are directly antithetical to ours or not even directly antithetical, but"
    },
    {
      "end_time": 7834.582,
      "index": 307,
      "start_time": 7806.698,
      "text": " the way in which it fulfills its goals might involve externalizing harm to things that are part of our goal set, that's bad for us, right? So the idea of don't let that thing happen prevents getting to an unaligned AGI, that's that particular category of risk. And so there are arguments around could an AGI like that, is it even possible? That's one question. If it is possible to have such a thing,"
    },
    {
      "end_time": 7863.78,
      "index": 308,
      "start_time": 7835.316,
      "text": " Is it possible to align it with human interests? What would that take? If it is possible to align it, is it possible to know ahead of time that the system you have will be aligned and will stay aligned? Those are all some of the questions in the space. Then do our current trajectories of AI research, like transformer tech or just neural networks or deep learning in general, do these converge on general intelligence?"
    },
    {
      "end_time": 7884.445,
      "index": 309,
      "start_time": 7864.241,
      "text": " If so, in what time period? Those are all some of the questions regarding the AGI risk space. Now, I want to talk about that risk, but I want to talk about other risks using that as an example in the space. Any questions or thoughts on that one to begin with?"
    },
    {
      "end_time": 7912.517,
      "index": 310,
      "start_time": 7885.043,
      "text": " Sure. Number one is that we may already have artificial intelligence in a baby form. Like we have hugging face. I don't know if you know what that is. And then there's the paper of sparks of artificial general intelligence. And that's distinguished from something that updates itself. I just want to make that clear. So there's a lot of questions regarding does it have to be better than humans at everything to be an existential risk? We could imagine we could imagine a"
    },
    {
      "end_time": 7943.166,
      "index": 311,
      "start_time": 7915.452,
      "text": " Von Neumann machine that was self-replicating and self-evolving that was not better at everything, but better at turning what was around it into more of itself and evolving its ability to do so and just having way faster feedback loops. And we could imagine that becoming an existential risk with a speed of a particular type of intelligence that does not mean better than us at everything. Yeah, that's a great point. Like an asteroid is not better than us at almost anything, but it can destroy us. And it"
    },
    {
      "end_time": 7968.729,
      "index": 312,
      "start_time": 7943.78,
      "text": " Yeah, it's not doing it through some kind of process that involves learning or navigating competitions at all. It's just kinetic impact. This would be a kind of intelligence, but it could be one that's a lot more like a very bad pandemic and the intelligence of a pathogen than the intelligence of a god. So talking about if the"
    },
    {
      "end_time": 7995.589,
      "index": 313,
      "start_time": 7969.445,
      "text": " What type of intelligence it would need? Is it generally intelligent? Is it autonomous? Is it agentic? Is it self-upgrading? They're related concepts, but they're not identical concepts. Let's go ahead and put the category of AGI risk as one topic in the AI risk space."
    },
    {
      "end_time": 8023.302,
      "index": 314,
      "start_time": 7997.227,
      "text": " Let's come to a much nearer term set of things, which is AI empowering bad actors. And we can talk about what of that is possible with the existing technology. What of that is possible with impending technology that we're for sure going to get on the current course versus things where we don't know how long it's going to take or even if we'll get there."
    },
    {
      "end_time": 8039.787,
      "index": 315,
      "start_time": 8026.578,
      "text": " So with regard to AI empowering bad actors, we could say how undefined the bad actor is, obviously, because one person's freedom fighter is another person's terrorist."
    },
    {
      "end_time": 8057.176,
      "index": 316,
      "start_time": 8040.384,
      "text": " So we can imagine someone who is terrified about environmental collapse deciding to become an eco terrorist being a maximally good actor in their worldview, but saying that the only answer is to start taking out massive chunks of the civilizational system that's destroying the environment."
    },
    {
      "end_time": 8087.654,
      "index": 317,
      "start_time": 8059.121,
      "text": " So I'm simply saying that I'm not being simplistic about what we mean by bad actor, but oriented to from whatever motivational type, whether it was pure sadism, whether it's nihilistic, burn it all down, or whether it's well motivated, but maybe misguided considerations. But AI for some destructive purpose. So now, this is something we have to address first."
    },
    {
      "end_time": 8115.401,
      "index": 318,
      "start_time": 8089.684,
      "text": " One thing I have found when people think about how significant AI risk will be and how significant AI upside will be. First on AI upside, it's just important because if we talk about risk and we don't talk about upside, it will be easy for a lot of people to say, oh, this is a techno pessimist Luddite perspective and kind of dismiss it at that. So I would like to say there is a"
    },
    {
      "end_time": 8136.408,
      "index": 319,
      "start_time": 8118.063,
      "text": " The upsides of AI, the best case examples that everyone is interested in, everyone is interested in, they're awesome. All the things that we care about, that we use intelligence to figure out, where intelligence is rate limiting, figure them out, rate limiting to figure out the rest of the problems, could we use it to solve those problems?"
    },
    {
      "end_time": 8162.261,
      "index": 320,
      "start_time": 8137.21,
      "text": " So could AI make major breakthroughs in cancer and immuno-oncology? And does anyone who's talking about slowing down AI, are they factoring all the kids that are dying of cancer right now? And if we could speed that thing up, could we affect that in our, like, that's a very personal, very real thing, right? So AI applied to curing all kinds of diseases and"
    },
    {
      "end_time": 8191.817,
      "index": 321,
      "start_time": 8163.285,
      "text": " and AI applied to psychiatric diseases and scientific breakthroughs and maybe resource optimization issues that help the environment and maybe the ability to help with coordination challenges if applied in certain ways. The positive applications, the kind of customized AI tutoring that could provide Marcus Aurelius level education where the best tutors of all of Rome were personally tutoring him in every topic,"
    },
    {
      "end_time": 8218.541,
      "index": 322,
      "start_time": 8192.398,
      "text": " could provide something better than that to every human, right? Could democratize aristocratic tutoring. There was Eric Hohl's, our essays on aristocratic tutoring are really good. You should bring them on here, but basically it was something many people have come to, which is that the great polymaths and super geniuses, the highest statistical correlator that pops out is that they all had something like aristocratic tutoring when they were young, or the vast majority of them."
    },
    {
      "end_time": 8242.892,
      "index": 323,
      "start_time": 8219.019,
      "text": " that even von Neumann and Einstein had mathematicians as governesses before they went to school. Terry Tau had Paul Erdos. There's this famous image of, I don't know who had Ed Whitten though. Many of the people simply had parents that were very actively involved, scientists, philosophers, thinkers."
    },
    {
      "end_time": 8270.077,
      "index": 324,
      "start_time": 8244.872,
      "text": " But if you think about why Marcus Aurelius dedicated the whole first chapter of meditations to his tutors, and if you think about how the Dalai Lama was conditioned, where you find this three-old boy and have the top lamas and all of Tibet tutor them on everything that is the whole canon of knowledge, of course, if that was applied to everybody, we'd have a very different world. This is a very interesting insight because it says that the upper boundaries on"
    },
    {
      "end_time": 8296.971,
      "index": 325,
      "start_time": 8270.418,
      "text": " that a lot of what we call human nature, because it's ubiquitous, it's not nature, it's nature through ubiquitous conditioning, that the edge cases on human behavior show conditioning in common, and that if you could make that kind of conditioning ubiquitous, you would actually change the human condition pretty profoundly. But as we move from feudalism to democracy and wanted to kind of get rid of all the dreadful unequal aspects of feudalism, looking at the fact that"
    },
    {
      "end_time": 8322.449,
      "index": 326,
      "start_time": 8297.415,
      "text": " Like you can't learn to be a world-class mathematician by a person who's not a world-class mathematician the same way you can by one who is and you can't you don't get a bunch of world-class mathematicians becoming third grade or eighth grade high school teachers you know or school teachers so how would you do that so it's kind of repugnant from a privileged lack of democratized capability point of view right and yet could you have could I make"
    },
    {
      "end_time": 8352.21,
      "index": 327,
      "start_time": 8322.995,
      "text": " LLM-trained AIs and better than LLM ones, where I can have von Neumann and Einstein and Gödel all in a conversation with me about formal logic, where they are not only representing their ideas, but maybe even now have access to all the ideas since then, and are pedagogically regulating themselves to my learning style. That's kind of amazing, right? And could they maybe be doing that based on also"
    },
    {
      "end_time": 8377.892,
      "index": 328,
      "start_time": 8352.807,
      "text": " psychological development theories. A colleague of mine, Zach Stein, has been working on this a lot of how to be evolving not just their cognitive capacity, but their psychosocial, moral, aesthetic, ethical, et cetera, full suite of human capacities. So I'm simply saying AI applied rightly. There's a lot of things to be excited and optimistic about in"
    },
    {
      "end_time": 8407.961,
      "index": 329,
      "start_time": 8379.155,
      "text": " So that's a given and we could do a whole long conversation on more of those examples. When I look at how people orient to the topic of AI risk, one of the things that seems to be a common kind of where their knee-jerk reactions before understanding all the arguments pro and con well comes"
    },
    {
      "end_time": 8436.869,
      "index": 330,
      "start_time": 8408.439,
      "text": " is how much they have a bias towards a kind of techno-optimism or techno-pessimism, kind of a where Pinker, Hans Rosling, there are still problems, but they're getting better. The world is getting progressively better and it's a result of things like capitalism and technology and science and progress. And so more of that will just keep equaling better. And yes, there will be problems, but they're worth it, right? Versus, so that's"
    },
    {
      "end_time": 8467.159,
      "index": 331,
      "start_time": 8437.5,
      "text": " I would call that techno-capital optimism, but the naive version that doesn't look at the cost of that thing, we would call a naive progress dialectic. In the dialectic of progress is good, progress is not good, or we're really making progress versus we're actually losing critical things or causing harm or whatever. In that dialectic, that's on the progress side, but a naive version of it."
    },
    {
      "end_time": 8498.985,
      "index": 332,
      "start_time": 8469.087,
      "text": " And so most of the, and just to address that briefly, so the naive progress story looks at all the things that have gotten better. And you can see this in lots of good books and Pinker's books, Diamandis's books, Rosling's talks, on and on. And then the extension of that into the future Diamandis start to do, and you could say Kurzweil is kind of an extension of that far out."
    },
    {
      "end_time": 8510.725,
      "index": 333,
      "start_time": 8499.514,
      "text": " Why naive is if it doesn't look at what is lost in that process and what is harmed in that process, as well as the increase in the types of risk that are happening."
    },
    {
      "end_time": 8535.947,
      "index": 334,
      "start_time": 8512.125,
      "text": " And so I would argue that most of those things in every kind of Rosalink presentation is cherry picking its data out of a humongous set to make a cherry picked argument. This is one of the reasons that fact checking is not enough is because you can cherry pick your facts. You can frame them in a particular way and create a conclusion that the totality of knowledge wouldn't support at all because of that process, right?"
    },
    {
      "end_time": 8561.22,
      "index": 335,
      "start_time": 8536.681,
      "text": " I would say there is a naive technopessimism or Luddite direction that looks at the real harms tech causes culturally, socially, environmentally, or other things, and wants back to the land movement and organic, natural, traditional, whatever, various types of, and if it is not paying attention to the types of benefit that are legitimate,"
    },
    {
      "end_time": 8587.449,
      "index": 336,
      "start_time": 8561.561,
      "text": " That's naive, but also if it's not paying attention to the fact that that worldview will simply, as we talked about before, not forward itself, because the one that advances more tech will develop more power and end up becoming the dominant world system. That also means that it's not actually having a worldview that can't orient towards shaping the world. So we have to, so putting those together, I would say"
    },
    {
      "end_time": 8614.582,
      "index": 337,
      "start_time": 8587.995,
      "text": " All of the things that the techno-optimists say tech has made better and all of us like the world we're going to the dentist involves Novocaine versus not Novocaine and where we have painkillers and where we have antibiotics under infection and stuff like that. All of the things that tech has made better have not come for free. There have been externalized costs and the cumulative effect of all of those costs is really, really significant."
    },
    {
      "end_time": 8640.009,
      "index": 338,
      "start_time": 8615.606,
      "text": " And so if you look at the progress narrative, the indigenous people that were genocided don't see it as a progress narrative. The fact that there's more biomass of animals in factory farms than there is in the wild today does not see that as a sign of progress. The animals that live in factory farms or all the species that are extinct don't see it as progress."
    },
    {
      "end_time": 8670.282,
      "index": 339,
      "start_time": 8640.503,
      "text": " the fact that we have many, many different possibilities of destroying the life support capacity of the planet relative to any previous time, or that almost no teen girl growing up in the industrialized world doesn't have body dysmorphia, where that was not an ubiquitous thing. There's a lot of things where you can say, damn, those technologies upregulated some things and externalize costs somewhere else. If you factor the totality of that, then you"
    },
    {
      "end_time": 8699.684,
      "index": 340,
      "start_time": 8671.323,
      "text": " can say okay there are a lot of positive examples any new type of tech can have but there's also a lot of externalities and harms it can have and we want to see how to get more of the upsides with less of the downsides and that can't be a rush forward process right that actually requires a lot of thinking about how to do that so i'm actually i actually am a techno optimist in a way meaning i i do see a future that"
    },
    {
      "end_time": 8729.718,
      "index": 341,
      "start_time": 8700.026,
      "text": " is high nature stewardship, high touch, and is also high tech. High touch? Yeah, meaning that the tech does not move us into being disembodied heads mediating exclusively through a digital world. So I would argue that"
    },
    {
      "end_time": 8758.746,
      "index": 342,
      "start_time": 8730.418,
      "text": " your online relationships don't do everything that offline relationships do. They do some additional things like distance and network dynamics, whatever. But if they debase your embodied relationships, they're causing a harm. If the online relationships improve your embodied relationships, not just the create online relationships and debase them, then that's a different thing, right? So that's what I mean by high touch."
    },
    {
      "end_time": 8788.916,
      "index": 343,
      "start_time": 8760.145,
      "text": " So I want to say that naive techno-optimism is, if we look at the history of corporations that are developing technology, market technology advancement focused on the upside and not terribly focused on the downside, we look at four out of five doctors choose Camel cigarettes. We look at better living through chemistry, providing"
    },
    {
      "end_time": 8811.408,
      "index": 344,
      "start_time": 8789.326,
      "text": " DDT and parathion and malathion, we look at adding lead to gasoline in a way that took a toxic chemical that was bound in ore underneath the biosphere and sprayed it into the atmosphere ubiquitously and dropped about a billion IQ points off the planet, made everybody more violent in terms of its neurotoxicology effects."
    },
    {
      "end_time": 8830.572,
      "index": 345,
      "start_time": 8812.398,
      "text": " Trusting the groups that are making the upside on moving the thing to figure out the risks historically is not a very good idea. I'm mentioning that in terms of now trusting the AI groups to do their own risk assessment. If you think about the totality of risks,"
    },
    {
      "end_time": 8859.326,
      "index": 346,
      "start_time": 8830.879,
      "text": " Well, then you want to say, how do we move the positive applications of this technology forward in a way that mitigates the really negative applications of it? That if one wants to be a techno optimist responsibly, they have to be thinking about that well. So what about the AI companies that say we do third party testing for safety? Do you still consider that somehow internal because they're the ones going out? Depends. If so. When I."
    },
    {
      "end_time": 8887.5,
      "index": 347,
      "start_time": 8859.77,
      "text": " early in the process of getting into risk assessment. I had times where corporations asked me to come do risk assessment on a technology or process and then when I did an honest risk assessment they were not happy because what they wanted me to do was some kind of box checking exercise that wouldn't cost them very much and wouldn't limit what they were going to do so they had plausible deniability to say they had done the thing and move forward quickly because what they didn't want was for me to say actually there is no way for you to pursue the market viability of this that does not"
    },
    {
      "end_time": 8913.524,
      "index": 348,
      "start_time": 8888.097,
      "text": " Yeah, totally. I have seen this in the example of something like a"
    },
    {
      "end_time": 8935.794,
      "index": 349,
      "start_time": 8914.275,
      "text": " mining technology or a new type of packaging technology that is wanting to say why it's doing something that addresses some of the environmental concerns. It addresses the environmental concerns it identified. We identify a bunch of other ones that it doesn't address well, that it moves some of the harm from this area to the other one. That's an example of where some of the problem would come."
    },
    {
      "end_time": 8962.705,
      "index": 350,
      "start_time": 8936.323,
      "text": " But I find this is just as bad in the nonprofit space or in the government space as well, not just in the for-profit space, because they also have, even if it's not a profit motive, they have an institutional mandate and their institutional mandate is narrow. They can advance that narrow thing. This is now the same thing as an AI objective, right? If the AI has an objective function to optimize X, whatever X is, or optimize a weighted function of XYZ and metrics,"
    },
    {
      "end_time": 8992.312,
      "index": 351,
      "start_time": 8963.626,
      "text": " everything that falls outside of that set, harm can be externalized to that and achieve its objective function. So I remember talking to groups, UN-associated groups, they were working on world hunger and their particular solutions involved bringing conventional agriculture to areas in the world that didn't have it, which meant all of the pesticides, herbicides and nitrogen fertilizers. And it was a huge increase in nitrogen fertilizer by a bunch of river deltas"
    },
    {
      "end_time": 9016.152,
      "index": 352,
      "start_time": 8992.91,
      "text": " where it currently wasn't that would increase dead zones and oceans from that nitrogen affluent that would affect the fisheries in those areas and everything else and the total biodiversity. And when I brought it up to them, they're like, oh, I guess that's true. But those are not the metrics we're tasked with. You know, we're tasked with how many people get fed this year, not how much the environment is ruined in the process."
    },
    {
      "end_time": 9044.633,
      "index": 353,
      "start_time": 9016.783,
      "text": " And so the reduction of the totality of an interconnected world to a finite set of metrics we're going to optimize for, whether it's one metric called net profit or GDP, or it's the metric of whatever the institution is tasked with or getting elected or something like that, it is entirely possible to advance that metric at the cost of other ones. And then it's entirely possible that"
    },
    {
      "end_time": 9073.49,
      "index": 354,
      "start_time": 9045.282,
      "text": " other groups who see that create counter responses to that who do the same thing in opposite directions and the totality of human behavior optimizing narrow metrics while both driving arms races and externalizing metrics in wide areas is at the heart of the coordination failures we face and so it happens to be that this is already something that we see with humans outside of AI but giving an AI an objective function"
    },
    {
      "end_time": 9102.841,
      "index": 355,
      "start_time": 9074.838,
      "text": " is the same type of issue. I was mentioning examples in the nonprofit space. I think there are examples of how to do AI safety that can also be dangerous. Somebody proposes an idea like, here's a type of AI that"
    },
    {
      "end_time": 9131.886,
      "index": 356,
      "start_time": 9103.234,
      "text": " could be good and we should build it, or on the other side, here's a AI safety protocol that would be good and we should instantiate it in regulation or whatever. We want to red team those ideas, meaning see how they break or fail, and violet team them, meaning see how they externalize harm somewhere else that they didn't intend before implementing them, which just means think through the causal set beyond the obvious set you're intending it for."
    },
    {
      "end_time": 9160.128,
      "index": 357,
      "start_time": 9132.875,
      "text": " There was this call for a six-month pause on training large language models bigger than GPT-4. I'm not saying that a pause is a bad idea. I'm saying as instantiated, it's not implementable and it's not obviously good. You saw the pushback as people were like, all right, so that means"
    },
    {
      "end_time": 9189.189,
      "index": 358,
      "start_time": 9160.452,
      "text": " that whatever actors are not included in this, which might mean bad actors, rush ahead relatives. That's a real consideration. And one has to say, okay, so are we stopping the accumulation of larger GPU clusters during that time? Are we stopping the development of larger access to larger data sets during that time that we'll be able to quickly configure them? Are we also stopping, you know,"
    },
    {
      "end_time": 9212.978,
      "index": 359,
      "start_time": 9189.684,
      "text": " There are plenty of other types of AI that are not LLMs being deployed to the public, but that are very powerful. Black Rocks Aladdin played some role in the fact that it has more assets under management than the GDP of the United States. There are military applications in development. Can you"
    },
    {
      "end_time": 9234.053,
      "index": 360,
      "start_time": 9213.268,
      "text": " What is the actual risk space and are we talking about slowing the whole thing or are we talking about slowing some parts relative to other parts where these kind of game theoretic questions emerge? How would we ensure that the whole space was slowing? How would we enforce that? Those are all things that have to be considered. You mentioned that GDP is not a great indicator and"
    },
    {
      "end_time": 9256.425,
      "index": 361,
      "start_time": 9234.701,
      "text": " GDP goes up with war and more military manufacturing. It goes up with increased consumerism and the cost to the environment. It goes up with addiction. Addiction is great for lifetime value of a customer. So there's something called good hearts law. So I'm sure you're familiar with good. Is this at the core of what you're saying? It's like, hey, it's hard to explain it."
    },
    {
      "end_time": 9284.224,
      "index": 362,
      "start_time": 9256.937,
      "text": " As soon as you have a metric that you try to optimize for, it ceases to become a good metric. For instance, I think this is from The Simpsons, but it may be real, that there is a town overrun by snakes or rats. I think it was rats. And then you say, hey, give me rat tails because it implies that you killed a rat. I'll give you a dollar every time you bring me a rat tail and we'll reduce the amount of rats. Maybe it initially did so, but then people realized I can farm rats and then just kill them and give you tails. And thus I have more total rats. This is a much more general phenomenon. So perhaps if you think Twitter follows"
    },
    {
      "end_time": 9312.142,
      "index": 363,
      "start_time": 9286.561,
      "text": " There are perverse forms of fulfilling that metric, meaning there's a way to fulfill that metric that either no longer provides the good or it provides the good while also affecting some other bats, which basically means you probably thought of that metric in a specific context. There's a bunch of wild rats and the only way to get a rat tail is to kill a wild rat, not in the context of farmed rats."
    },
    {
      "end_time": 9335.742,
      "index": 364,
      "start_time": 9312.432,
      "text": " And so it kind of relates to the topic we were mentioning earlier about government that it's not just instantiating a government that makes sense on the current landscape, but recognizing the landscape is going to keep changing and it'll change in a way that has an incentive to figure out how to control the regulatory systems and how to game the metric systems."
    },
    {
      "end_time": 9356.169,
      "index": 365,
      "start_time": 9336.374,
      "text": " So with regard to the topic of AI alignment, right, because if we tell the AI maximize the number of rat tails, then it could like Bostrom's paperclip maximizer. Before we continue, it's imperative that we have a brief overview of Bostrom's thought experiment called the paperclip maximizer."
    },
    {
      "end_time": 9368.336,
      "index": 366,
      "start_time": 9356.886,
      "text": " The paperclip maximizer scenario, initially conceived by philosopher Nick Bostrom in 2003, illustrates the potential dangers associated with misaligned goals of artificial general intelligence, that is, AGI, agents."
    },
    {
      "end_time": 9395.009,
      "index": 367,
      "start_time": 9370.708,
      "text": " In this hypothetical scenario, an AGI is tasked with the seemingly innocuous goal of maximizing the number of paperclips it produces. However, rather than competence and focus serving as a salutary quality, it's in fact due to its extreme competence and single-minded focus that it proceeds to transform the entire planet and eventually the universe into paperclips, annihilating humanity and all of life in the process."
    },
    {
      "end_time": 9402.739,
      "index": 368,
      "start_time": 9395.009,
      "text": " The core ideas to understand from this scenario are the importance of value alignment, the orthogonality thesis, and instrumental convergence."
    },
    {
      "end_time": 9426.118,
      "index": 369,
      "start_time": 9404.036,
      "text": " Value alignment is the process of ensuring that an AGI shares our values and goals in order to prevent cataclysmic outcomes, such as the aforementioned paperclip maximizer. The orthogonality thesis states that intelligence and goals can be independent, implying that a highly intelligent AGI can have arbitrary goals. You hear this, by the way, when people say that we've become more knowledgeable with time, yet our ancestors were wiser."
    },
    {
      "end_time": 9447.21,
      "index": 370,
      "start_time": 9426.118,
      "text": " Instrumental convergence refers to the phenomenon where diverse goals lead to similar instrumental behaviors like resource acquisition and self-preservation. For instance, as Marvin Minsky points out, both goals of prove the Riemann hypothesis and make paper clips may result in all of the Earth's resources being dismantled, disintegrated in an effort to accomplish these goals."
    },
    {
      "end_time": 9477.927,
      "index": 371,
      "start_time": 9453.968,
      "text": " Thus, despite the ultimate goal being different, for instance, the Riemann hypothesis and make paper clips are not the same, there's a convergence along the way. What's often overlooked in AGI development is something called the value loading problem. This refers to the difficulty of encoding our moral and ethical principles into a machine. That is, how do you load the values? Keep in mind that AGI needs to be corrigible and robust to distributional shifts."
    },
    {
      "end_time": 9506.459,
      "index": 372,
      "start_time": 9477.927,
      "text": " AGI, or even baby AGI, needs to maintain its alignment even when encountering situations deviating from its training data. Additionally, something that we want is that the AGI should be able to recognize ambiguity in its objectives and seek clarification rather than optimizing based on flawed interpretations. Of course, we as people have ambiguity and flawed interpretations. The difference is that AGI could decidedly exacerbate our own existing known and unknown flawed nature."
    },
    {
      "end_time": 9533.558,
      "index": 373,
      "start_time": 9506.459,
      "text": " Another difference is that we can't replicate on a second to second or millisecond to second basis. At least not yet. One promising approach to this AGI alignment scenario or misalignment scenario is something called reward modeling. This involves estimating a reward function based upon observing our preferences rather than us providing predefined objectives. And some of the more hilarious examples I found of the predefined sort are as follows."
    },
    {
      "end_time": 9555.111,
      "index": 374,
      "start_time": 9533.78,
      "text": " The aircraft landing problem was explicated in 1998 when Feldet attempted to evolve an algorithm for landing aircraft using genetic programming. The evolved algorithm exploited some overflow errors in the physics simulator, creating extreme forces that were estimated to be zero because of the error, resulting in a perfect score without actually solving the problem that it was intended to solve."
    },
    {
      "end_time": 9582.739,
      "index": 375,
      "start_time": 9555.111,
      "text": " Another example is the case of the Roomba. In a tweet, Custard Smigley described connecting a neural network to this Roomba to navigate without bumping into objects. The reward scheme encouraged speed and discouraged hitting the bumper sensors. Okay, so think about it. What could happen? Well, the Roomba learned to drive backward. There are no sensors in the back, so just went about bumping frequently and merrily. In a more recent example, a reinforcement learning agent was trained to play the video game Road Runner."
    },
    {
      "end_time": 9613.131,
      "index": 376,
      "start_time": 9583.336,
      "text": " It was penalized for losing in level 2, so did it just become fantastic at the game? Not quite. The agent discovered that it could kill itself at the end of level 1 to avoid losing in level 2, thus exploiting the reward system without actually improving its performance. What would happen if it was tasked to keep us from hitting some tipping point? By the way, is living more valuable than not living? What's the rational answer to this? This is perhaps the most important fundamental question."
    },
    {
      "end_time": 9643.422,
      "index": 377,
      "start_time": 9616.118,
      "text": " With regard to the topic of AI alignment, because if we tell the AI maximize the number of rat tails, then it could, like Bostrom's paperclip maximizer, start clear cutting forests to grow massive factory farms of rats and whatever. You can do the reducto ad absurdum of a very powerful system."
    },
    {
      "end_time": 9673.746,
      "index": 378,
      "start_time": 9644.872,
      "text": " So then the question is, you say, okay, well, do the rat tails or the GDP or whatever it is while also factoring this other metric. Okay, well, you can do those two metrics and there's still something armed. What about these three? The question is, is there a finite describable set of things that is adequate for something that can do optimization that powerfully, right? That is a way of thinking. And so it's, is there a finitely describable definition of good?"
    },
    {
      "end_time": 9703.712,
      "index": 379,
      "start_time": 9674.804,
      "text": " is another way of thinking about it, right? Or in terms of optimization theory. Yes, that's something I think about the misalignment problem. Is it in principle impossible to make the explicit what's implicit? When we state a goal, it carries with it manifold unstated assumptions. For instance, I say bring me coffee or bring me Uber food. We imply indirectly don't run over a pedestrian to bring me the Uber food. Don't take it from the kitchen prior to it being packaged. Don't break through my door to give it to me."
    },
    {
      "end_time": 9729.275,
      "index": 380,
      "start_time": 9703.712,
      "text": " And we cloak all of that and say, that's just common sense. Common sense is extremely difficult to make explicit. Even object recognition is extremely difficult. And then as soon as we can get a robot to do something that is human-like, then it becomes more and more black box-like. And then you have this huge problem of interpretability of AI. So it is an extremely difficult problem. And I wonder how much of the misalignment problem is just that? Is it just the fact that we can't make explicit what's implicit?"
    },
    {
      "end_time": 9755.623,
      "index": 381,
      "start_time": 9729.275,
      "text": " and we overvalue how much the explicit matters and implicit is far more complex. I don't know. This is just something that I'm putting out there and asking. In other words, to relate this to what you were saying is, is it finite? And even if it's finite, is it like a tractable amount of finiteness that either we can handle it or we can design an AI that we feel like we have a handle over that can understand it? Yeah. And if you try to say, okay, can I"
    },
    {
      "end_time": 9784.462,
      "index": 382,
      "start_time": 9756.203,
      "text": " mine myself, my brain, for all the implicit assumptions and put them all. I think every version of the thought experiment you realize you can't. But even if you do, that's only the ones that are associated with the kinds of context you've been exposed to so far. But there are a heap of things that nobody has ever done that maybe an AI could do that now also have to be factored in there that you didn't"
    },
    {
      "end_time": 9798.387,
      "index": 383,
      "start_time": 9785.179,
      "text": " There's also something that because humans all"
    },
    {
      "end_time": 9823.985,
      "index": 384,
      "start_time": 9799.787,
      "text": " co-evolved and have similar-ish nervous systems and all kind of need to breathe oxygen and want to universe a world that has similar physics and whatever. There's some stuff where the implicit processing is kind of baked into the evolutionary process that brought us that is not true for a silica-based system. It has totally different, it is not subject to the same physical constraints, right? It could optimize itself in a very different physical environment."
    },
    {
      "end_time": 9848.524,
      "index": 385,
      "start_time": 9824.804,
      "text": " And so even the thing that we would call just kind of an intuitive thing is very different for a very different type of system. So I would say when it comes to the topic of AGI alignment, there are different positions on alignment. I would say the strongest position"
    },
    {
      "end_time": 9875.725,
      "index": 386,
      "start_time": 9849.326,
      "text": " is AGI alignment is not well, first we actually have to discuss what we even mean by alignment, right? Because initially the topic of alignment means, can we ensure that the AI is aligned with human values and human intentions? So that when you say, bring me a cup of coffee, that you're all those implicit intentions that you have are not damaged in the process. But if we look at the"
    },
    {
      "end_time": 9897.602,
      "index": 387,
      "start_time": 9876.288,
      "text": " all of the animals and factory farms and the extinct species and the disruption to the environment and the conflict between humans and other humans and class subjugation and all those things, you can say human intent is not unproblematic. And exponentiating human intent as is, is not actually an awesome solution."
    },
    {
      "end_time": 9926.903,
      "index": 388,
      "start_time": 9898.746,
      "text": " And so do you want it to be aligned with human intention? Well, it currently looks like human intention has created a social sphere and a technosphere that is fundamentally misaligned with the biosphere they depend upon. And it is the technosphere social sphere complex is kind of autopoetically scaling while debasing the substrate it depends upon. In other words, it's on a self-termination path."
    },
    {
      "end_time": 9956.237,
      "index": 389,
      "start_time": 9927.517,
      "text": " So and that represents something like the collective intent of humans currently in this context. So if you ensure that the AI is is aligned with intent in the narrow obvious definition, that is also not a good definition of alignment. So insofar as the humans are not aligned, their intent is not aligned with the biosphere they depend upon and is not aligned with the well-being of other humans who will produce counter responses."
    },
    {
      "end_time": 9980.572,
      "index": 390,
      "start_time": 9956.783,
      "text": " and most of the time isn't even aligned with their own future good, as is the case with all addictive behavior, right? Sorry to interrupt. I'm so sorry. Is this a place where you disagree with Jadkowski or has he also expressed points that are in alignment with your point about alignment? I don't know if he has. There's nothing that I know of that I disagree with. I think when he's, uh,"
    },
    {
      "end_time": 10009.957,
      "index": 391,
      "start_time": 9981.271,
      "text": " I'm sure he's thought about this. I just haven't read that of him. When he's talking about alignment, he's talking about this more basic issue of, as he tries to give the example, if you have a very powerful AI and you ask it to do something that would be very hard for us to do, but should be a tractable task for it, like replicate the strawberry at a cellular level, that can you make an AI that could do that, that doesn't destroy the world in the process?"
    },
    {
      "end_time": 10036.817,
      "index": 392,
      "start_time": 10011.732,
      "text": " Even that level, not being clear how to do at all, is the thing he's generally focused on. I'm sure he has deeper arguments beyond it that if we got that thing down, what else would we have to get? If we look at all of the social media issues that the social dilemma addressed, where"
    },
    {
      "end_time": 10059.701,
      "index": 393,
      "start_time": 10039.087,
      "text": " You can say, Facebook can say we're giving people what they want or TikTok or the YouTube algorithm or Instagram or whatever, because we're not forcing people to use it. Except it's saying we're giving people what we want in the same way that the drug dealers that gives drugs to kids is saying that, which is we can create addiction. We can prey on"
    },
    {
      "end_time": 10089.292,
      "index": 394,
      "start_time": 10059.991,
      "text": " the lower angels of people's nature and if they're individual people who don't even know they're in such a competition and we're talking about a major fraction of a trillion dollar organization employing supercomputers and AI in an asymmetric warfare against them to say we are giving them what they want while engineering what they want that's you know it's a tricky proposition but we can see how the algorithm that optimizes for in whether it's time on site or engagement both have"
    },
    {
      "end_time": 10116.852,
      "index": 395,
      "start_time": 10089.633,
      "text": " happened. That's a perverse metric because you can get it through driving addiction and driving tribalism and driving fear and limbic hijacks and all those things. What's important to acknowledge, that's already an AI. It's already a type of artificial intelligence that is collecting personal data about me and then looking at the totality of content that it can draw from and being able to create a newsfeed for me"
    },
    {
      "end_time": 10144.104,
      "index": 396,
      "start_time": 10117.227,
      "text": " that continues to learn based on what is stickiest for me, what I engage with the most. Now, in this case, the AI is creating the content. It's just choosing which content gets put in front of people. In doing so, it is now incentivizing all content creators to create the content that does best on those algorithms. So it's actually, in a way, farming all human content creators"
    },
    {
      "end_time": 10162.551,
      "index": 397,
      "start_time": 10145.589,
      "text": " Because it's incentivizing them to do whatever it is that is within the algorithm's bidding. Now, as soon as we have synthetic media, which is rapidly emerging, where we can not just have humans creating whatever the TikTok video is, but we can have deep fake versions of them that are being created very rapidly."
    },
    {
      "end_time": 10192.841,
      "index": 398,
      "start_time": 10163.012,
      "text": " And now you have a curation AI where that first one's AI was just to curate the stickiest stuff personalized to people and creation AIs that can be creating multiple things to split test relative to each other and the feedback loop between those, you can just see that the problem that has been there just hypertrophies. Let alone the breakdown of the epistemic comment and the ability to tell what is real and not real and all those types of issues."
    },
    {
      "end_time": 10221.903,
      "index": 399,
      "start_time": 10193.933,
      "text": " I want to come back to where we were. So obviously the curation algorithm is saying that it's aligned with human intent, but not really, right? It's aligned with human intent because it's giving stuff that they empirically like because they're engaging with it. But most people then actually end up having regret of how much time they spend on those platforms and wish that they did less of it. And they don't plan in the day, I want to spend this much time doom scrolling. And so is it"
    },
    {
      "end_time": 10248.439,
      "index": 400,
      "start_time": 10222.756,
      "text": " Is it really aligned with their intent? And in general, is aligning with intent that includes the lowest angels of people's nature type intense? Is that a good thing? Is that a good type of alignment when you factor the totality of effects it has? So we could say that the solution to the algorithm issue should be that"
    },
    {
      "end_time": 10273.558,
      "index": 401,
      "start_time": 10249.138,
      "text": " Because the social media platform is gathering personal data about me and it's gathering based on its ability to model my psyche based on all of whom my friends are and what I like and what I don't like and all those things and my mouse hover patterns. It has an amount of data about me that can model my future behavior better than a lawyer or a psychotherapist or anybody else could."
    },
    {
      "end_time": 10287.398,
      "index": 402,
      "start_time": 10274.599,
      "text": " So there are provisions in law of privileged information. If you have privileged information, what are you allowed to do with it? And there are provisions in law about undue influence."
    },
    {
      "end_time": 10315.776,
      "index": 403,
      "start_time": 10287.671,
      "text": " The platforms are gathering privileged information, that they have undue influence. As a result, there should be a fiduciary responsibility. This is one of the things that we do when there's a radical asymmetry of power. Because if there's a symmetry of power, we say caveat emptor, buyer beware. It's kind of on you to make sure that you don't get sold a shitty thing or engage with. But if there's a radical asymmetry of power, can you tell the kid buyer beware about an adult that is playing them? No, you can't, right?"
    },
    {
      "end_time": 10343.712,
      "index": 404,
      "start_time": 10316.22,
      "text": " And so in that way, can the person who isn't a doctor know that they really don't need a kidney transplant if the doctor tells them that they do because the doctor gets paid when they give kidney transplants? Well, that's so bad. We don't want that to happen. We make law saying doctors can't do that. There's a Hippocratic oath to act not just in their own economic interest, but in they are an agent on behalf of the principal because the principal cannot buy or beware, right?"
    },
    {
      "end_time": 10371.664,
      "index": 405,
      "start_time": 10344.804,
      "text": " And so then there is a board of other doctors who are also at that upper asymmetry who can verify did the person do malpractice or not. Same with the lawyer. If the lawyer wanted to just bill by the 15 minute sections maximally to drain as much money from me, they could because there's no way I can know that what they're telling me about law is wrong because they have so much asymmetric knowledge relative to me that we have to make that illegal. We have to make sure that the lawyer is a"
    },
    {
      "end_time": 10398.558,
      "index": 406,
      "start_time": 10372.927,
      "text": " agent on behalf of me as the principal. So with lawyers and doctors and therapists and financial advisors, we have this fiduciary principal agent binding thing. And it's because there's such an asymmetry that there cannot be self-protection. If I'm engaging with them and giving them this privileged information and they wanted to fuck me, they could. But for my own well-being, I have to engage with them and give"
    },
    {
      "end_time": 10426.561,
      "index": 407,
      "start_time": 10400.333,
      "text": " have some legal way of binding that. But of course, in the case to bind it where the lawyers all have some practice law that they can be bound by, they can be shown they did malpractice, and same with doctors, that requires a legal body of lawyers or a body of doctors that can assess if what that doctor or lawyer did was wrong. So somebody else that has even higher asymmetry, right, the group of the top thinkers, this becomes very hard when it comes to AI."
    },
    {
      "end_time": 10457.398,
      "index": 408,
      "start_time": 10428.302,
      "text": " So let's start by saying rather than the AI being a rivalrous relationship with me when I'm on social media and it is actually gathering the information about me not to optimize my well-being but to optimize ad sales for the corporation that is the platform and the corporations that are its actual customers. In which case it has the incentive to"
    },
    {
      "end_time": 10481.63,
      "index": 409,
      "start_time": 10457.858,
      "text": " prey on the lowest angels of my nature and then be able to say it was my intent and I had free choice. So we could say that should be a violation of the principal agent issue. And because there's undue influence, we can show there's undue influence. Consilience Project, we wrote some articles on this. There's one on undue influence that makes this argument more deeply. And"
    },
    {
      "end_time": 10509.309,
      "index": 410,
      "start_time": 10482.09,
      "text": " Because you can show it's gathering privileged information, it should be in fiduciary relationship where it has to pay attention to my goals and optimize aligned with my goals rather than I'm the product and it's optimizing with the goals of the corporation or its customers, right? In order to do that, that would change its business model. It couldn't have an ad model anymore. I would either have to pay a monthly fee for it or the state or some commons would have to pay for it and everybody had access to it or some other things."
    },
    {
      "end_time": 10541.032,
      "index": 411,
      "start_time": 10511.698,
      "text": " That seems like a very good step in the right direction. And that is an alignment issue thing, right? The principal agent issues a way of trying to solve the alignment, which is to say that this more powerful AI system here, the curatorial AI, social media, would be aligned with my interest in bound in some way. And maybe we would extend that to all types of AI. Well, of course, in the AGI case where it becomes fully autonomous,"
    },
    {
      "end_time": 10564.616,
      "index": 412,
      "start_time": 10541.698,
      "text": " and becomes more powerful than any other systems. What other system can check it to see if what it is doing is actually aligned or not? There isn't a group of lawyers. They can check that lawyer, right? So that becomes a big issue. And if it really becomes autonomous, as opposed to empowering a corporation, which is run by humans, it's different. And"
    },
    {
      "end_time": 10588.422,
      "index": 413,
      "start_time": 10565.043,
      "text": " So this is one part on the topic of alignment and alignment with our intention or well-being. You can do superficial alignment with our intention, which the social media thing already does, but it's not aligned with our actual well-being because an asymmetric agent is capable of exploiting your sense of intentionality. And similarly,"
    },
    {
      "end_time": 10606.357,
      "index": 414,
      "start_time": 10589.309,
      "text": " When you say there's a common sense that says, don't bring the door down, you're bringing me coffee, there should be a common sense that says, don't over fish the entire ocean and cut all the damn trees down and turn them into forests in the process of growing GDP and there is clearly not. Right?"
    },
    {
      "end_time": 10635.196,
      "index": 415,
      "start_time": 10607.159,
      "text": " And so we can see that the current, without AI, human world system already actually doesn't have that kind of check and balance in it in all the areas that it should just so long as the harms are externalized somewhere far enough that we don't instantly notice them and change them. So the question of what do we, if we have radically more powerful optimizer than we already have, what do we align its goal with?"
    },
    {
      "end_time": 10663.029,
      "index": 416,
      "start_time": 10635.623,
      "text": " If we just say align it with our intention, but it can change our intention because it can behavior mod me and we can't possibly deal with that because of the asymmetry, that's no good as in the Facebook case. If we try to align it with the interest of a nation state that can drive arms races with other ones and other nation states in war or drive it, align it with the current economy that's misaligned with the biosphere, that's not good. So the topic of alignment is actually an incredibly deep topic."
    },
    {
      "end_time": 10693.831,
      "index": 417,
      "start_time": 10664.855,
      "text": " And this now gets to what you've probably addressed on your show in other places. It gets to a very philosophic issue, which is the kind of is-ought issue, which is science can say what is, it can't say what ought, right? And that kind of distinction by Mill and others historically, and that the applied side of science as technology and engineering can change what is, but what ought to be, what is the ethics that is somehow compatible with"
    },
    {
      "end_time": 10721.203,
      "index": 418,
      "start_time": 10694.087,
      "text": " science is a challenge. The best answer we have had, arguably that came from the mind that created both a lot of our nuclear technology and our foundations of AI, von Neumann was game theory, right? That the idea that is good is the idea that doesn't lose. And we can arguably say that that thing instantiated by markets"
    },
    {
      "end_time": 10748.336,
      "index": 419,
      "start_time": 10721.749,
      "text": " and national security protocols has actually been the dominant definition of ought that ends up driving the power of technology. Because if science says, we can't say what ought, we can't. We can only say what is, but we're really fucking powerful at saying what is in a way that reduces to technology that changes what is where we can optimize some metrics and say it's good, even if we externalize a lot of harm to other metrics or optimize in groups at the expense of outgroups or whatever it is, right?"
    },
    {
      "end_time": 10777.722,
      "index": 420,
      "start_time": 10749.292,
      "text": " But we say that not only do we not have an ought, but that any system of ought is not the philosophy of science. So is, insofar as that's concerned, out of scope or gibberish, well, then what ends up guiding the power of technology? Markets do, and to some extent national security does. In other words, rival risk and theoretic kind of interests. And so what gets researched, the thing that has the most market potential?"
    },
    {
      "end_time": 10800.964,
      "index": 421,
      "start_time": 10778.848,
      "text": " And so then again it is, what is actually developing the technology? Because as Einstein said, like, I was developing science not knowing it was going to do that application, didn't want that application, wanted sciences for social responsibility. But what ends up, for the most part, the research that gets funded is R&D towards something."
    },
    {
      "end_time": 10816.237,
      "index": 422,
      "start_time": 10802.056,
      "text": " that ends up either advancing the interest of a nation state or the interest of a corporation or whatever the game theoretic metric set of the group of people that is doing the thing."
    },
    {
      "end_time": 10847.295,
      "index": 423,
      "start_time": 10817.517,
      "text": " And so what I would say is that as we get to more and more powerful is, more and more powerful science that creates more and more powerful tech and exponentially powerful tech, especially as we're already hitting fragility of the planetary systems. And when we say more powerful, we mean like exponentially more powerful, not iteratively more powerful. You have to have a system of auth powerful enough to guide, bind and direct it. Because if you don't, it is powerful enough to in whatever it is optimizing for,"
    },
    {
      "end_time": 10876.408,
      "index": 424,
      "start_time": 10848.029,
      "text": " destroy enough that what it optimizes for doesn't matter anymore. Now, this is a fundamentally deep metaphysical philosophical issue. And of course, when we talk about regulation, law, the basis of law is jurisprudence, right? And is ought questions, right? Applied ethics that get institutionalized for exactly this reason."
    },
    {
      "end_time": 10901.34,
      "index": 425,
      "start_time": 10876.903,
      "text": " And so when we say if we have tech that is powerful enough to do pretty much fucking anything, what should be guiding that and what should be binding it? And if we don't answer those well, what is the default of what we'll be guiding as and binding it currently? And what does that world look like? So this is super cheerful conversation. What is the call to action?"
    },
    {
      "end_time": 10931.442,
      "index": 426,
      "start_time": 10904.258,
      "text": " Okay, we're quite a far ways away from that. Let me try to expedite a couple other parts. When we were mentioning the AI risk, we said AI empowering bad actors. So you can think about whether a bad actor is a domestic terrorist who the best thing they can do right now is get an AR-15 and shoot up a transformer station to take down the power lines. AR-15 is a kind of tech that has some capability"
    },
    {
      "end_time": 10956.476,
      "index": 427,
      "start_time": 10933.933,
      "text": " getting a sense of being disenfranchised by the current system and being motivated to utilize what is at their resources to do something about it. The barrier of entry of the more powerful tech is getting lowered. You can put those things together."
    },
    {
      "end_time": 10980.794,
      "index": 428,
      "start_time": 10957.295,
      "text": " But whether you're talking about domestic terrorism like that, or you're talking about international terrorism from larger groups, or you're talking about full military applications. But let's just go ahead and say, can we make deep fakes that make the worst kinds of"
    },
    {
      "end_time": 11010.333,
      "index": 429,
      "start_time": 10981.323,
      "text": " Confusion, conspiracy theory, in-group, out-group thinking, propaganda, of course. That is an emerging technology that's eminent. Can we use people's voices and what looks like their video and text for ransom and fucked up stuff? You can think of all the bad actor applications and then you can pretty much apply it to... This is a piece of theory I wanted to say. Every technology"
    },
    {
      "end_time": 11034.65,
      "index": 430,
      "start_time": 11010.657,
      "text": " has certain affordances. When you build it, it can do things where without that technology, you couldn't do those things, right? A tractor allows me to do things that I couldn't do without a tractor, just the shell in terms of volume and types of work and various things. Every technology is also combinatorial with other tech because what a hammer can do if I"
    },
    {
      "end_time": 11057.773,
      "index": 431,
      "start_time": 11037.995,
      "text": " And obviously it requires the blacksmithing to make the hammer, right? So you have, you don't just have individual tech, you have tech ecosystems. And the combinatorial potential of these pieces of tech together have different affordances, right? So, but then what do we use it for is based on the motivational landscape."
    },
    {
      "end_time": 11083.439,
      "index": 432,
      "start_time": 11058.012,
      "text": " I can use something like a hammer to be Jimmy Carter and build houses for the homeless with Habitat for Humanity, or I can use it as a weapon and kill people with it. And so the tech has the affordances to do both of those. So the tech will be developed and utilized based on motivational landscapes. Sure. And just briefly, and going back to earlier, it's not just dual, because that would be double-edged, it's multipolar. Omni, yes. Okay."
    },
    {
      "end_time": 11105.964,
      "index": 433,
      "start_time": 11084.462,
      "text": " So what we can say is the tech will end up potentially getting utilized by all agents for all motives that that tech offers affordances relevant to their motives. And so when we're building a piece of tech, we don't want to think about what is our motive to use it. We want to think about"
    },
    {
      "end_time": 11129.548,
      "index": 434,
      "start_time": 11106.271,
      "text": " Are we making a new capacity that didn't exist before, lowering the barrier of entry to a particular kind of capacity, where now what are all the motives that all agents have who have access to that technology and what is the world that results from everybody utilizing it that way? That's factoring second, third, fourth order thinking into the development of something, a new capacity that changes the landscape of the world."
    },
    {
      "end_time": 11160.742,
      "index": 435,
      "start_time": 11131.186,
      "text": " I would say that every scientist who is working on synthetic bio for curing cancer or AI for solving some awesome problem, every scientist and engineer and et cetera has an entrepreneur and ethical responsibility to think about the new capability they're bringing into the world that didn't exist, not just how they want to use it, but the totality of use that will happen by them having brought it into the world that wouldn't had they not."
    },
    {
      "end_time": 11190.981,
      "index": 436,
      "start_time": 11161.459,
      "text": " There is no current, when I say there's an ethical responsibility, there is no legal responsibility. There is no fiduciary responsibility where you are liable for the harms that get produced by a thing that you bring about that someone else reverse engineers and uses a different way. But there is financial incentive and Nobel prizes for developing the thing for your purpose. And then again, socializing the losses of whatever anybody else does with it. So this is one of those cases where the personal near term narrow motive"
    },
    {
      "end_time": 11219.667,
      "index": 437,
      "start_time": 11191.323,
      "text": " This is us being fucking narrow AIs in an ethical sense, is to do the thing even if the net result of the thing ends up being catastrophically harmful, right? So the incentive deterrent, the motivational landscape is messed up. So every tech, now I want to make a couple more philosophy of tech arguments. Tech is not values neutral, meaning that the hammer is not good or bad. It's just a hammer and whether you use it to build a house for the homeless or"
    },
    {
      "end_time": 11249.343,
      "index": 438,
      "start_time": 11220.009,
      "text": " unbeat someone's head is up to you that the motivational landscape and the tech have nothing to do with each other. This is not true. If a technology gives the capacity to do something that provides advantage, relative advantage in a competitive environment, whether it's one nation state competing with another nation state or one corporation or one tribe with another one. If it provides significant competitive advantage, if you use it a particular way,"
    },
    {
      "end_time": 11275.913,
      "index": 439,
      "start_time": 11250.503,
      "text": " then anyone using it that way creates a multipolar trap that obligates the others to use it that way or a related way. And so we end up getting a couple of things, right? This is the classic example I've used a lot is if we think about, and it's because there's been so much analysis on this example. If you think about the plow as a technology,"
    },
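The competitive dynamic described here, where anyone using a tech a certain way obligates others to follow, has the structure of a multipolar trap. A minimal sketch in Python, with made-up payoff numbers chosen purely for illustration (not anything stated in the conversation), shows why individually rational adoption can still leave everyone worse off:

```python
# Toy illustration of a multipolar trap as a two-player game.
# Payoff numbers are assumptions chosen only so that adopting the tech
# dominates individually while mutual adoption is worse for both than
# mutual restraint (a prisoner's-dilemma structure).

PAYOFFS = {  # (my_choice, their_choice) -> my_payoff
    ("adopt",   "adopt"):   1,   # arms race: advantage cancels, externalities remain
    ("adopt",   "abstain"): 5,   # unilateral advantage
    ("abstain", "adopt"):   0,   # lose by default
    ("abstain", "abstain"): 3,   # mutual restraint, no externalities
}

def best_response(their_choice: str) -> str:
    """Whatever the other party does, adopting pays more for me."""
    return max(("adopt", "abstain"), key=lambda mine: PAYOFFS[(mine, their_choice)])

for theirs in ("adopt", "abstain"):
    print(f"If they {theirs}, my best response is to {best_response(theirs)}")
# Both parties reason this way, so the equilibrium is (adopt, adopt) with
# payoff 1 each, even though (abstain, abstain) would give 3 each.
```

Under these illustrative numbers, the equilibrium is mutual adoption even though mutual restraint is better for everyone, which is the obligating dynamic the conversation is pointing at.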
    {
      "end_time": 11297.363,
      "index": 440,
      "start_time": 11276.22,
      "text": " that was one of the key technologies that moved us from sub Dunbar number, hunter gatherer, maybe horticultural subsistence cultures to large agricultural civilizations. The plow is not a neutral technology where you can choose to use it or not choose to use it. The populations that used it"
    },
    {
      "end_time": 11318.319,
      "index": 441,
      "start_time": 11297.892,
      "text": " made it through famines and grew their populations way faster than the ones who didn't and they use their much, much larger populations to win at wars, right? So the meme set that goes along with using it ends up making it through evolution. The meme set that doesn't, doesn't make it through evolution. And correspondingly, there are, in order to implement the tech,"
    },
    {
      "end_time": 11341.834,
      "index": 442,
      "start_time": 11318.763,
      "text": " It has ethical consequences. I had to clear cut in order to do the kind of row cropping that the plow really makes possible. I have to get an open piece of land that is now being used for just human food production and not any other purpose. So I'm going to clear cut the forest or a meadow or something to be able to do that."
    },
    {
      "end_time": 11372.551,
      "index": 443,
      "start_time": 11342.756,
      "text": " I'm already starting the Anthropocene with that, right? Changing natural environments from whatever habitat and home they were for all the life that was there to now serving the purpose of optimizing human productivity. And I have to yoke an ox and I probably have to castrate it and do a bunch of other things to be able to make that work and probably beat the ox all day to keep pulling the plow. In order to do that, I have to move from the animism of"
    },
    {
      "end_time": 11393.029,
      "index": 444,
      "start_time": 11373.456,
      "text": " I respect the great spirit of the buffalo and we kill this one with reverence knowing that as it nourishes our bodies, our bodies will be put in the dirt and make grass that its ancestors will eat and part of the great circle of life and whatever kind of idea like that too, it's just a dumb animal. It's put here for human purposes, man's dominion over, it doesn't have feelings like us, that kind of thing."
    },
    {
      "end_time": 11418.933,
      "index": 445,
      "start_time": 11393.609,
      "text": " which then spills over to, it's just, it's not like us. So we remove our empathy from it and we apply that to other races, other classes, other species, other whatever, right? So something like the plow is not values neutral. To be able to utilize it, I have to rationalize its use, realizing it creates certain externalities. And if I see those externalities, I have to have a value system that goes along with that."
    },
    {
      "end_time": 11444.445,
      "index": 446,
      "start_time": 11419.787,
      "text": " Wait, sorry, to be particular with the word choice, it's not that the plow is not value neutral, it's the use of the plow. Exactly, exactly. And that the plow doesn't use itself, right? And so the use of the plow is not value neutral. Now, a life where I am hunting and gathering versus a life where I'm plowing are also totally different lives. So it codes a completely different behavior set."
    },
    {
      "end_time": 11458.063,
      "index": 447,
      "start_time": 11444.974,
      "text": " In doing that, it makes completely new mythos, which is why the hunter-gatherer mythos and the agrarian mythos are completely different. They have different views towards men and women and towards sky gods versus animism and towards all kinds of things."
    },
    {
      "end_time": 11488.626,
      "index": 448,
      "start_time": 11458.712,
      "text": " And so, but the other thing is that it provides so much game-theoretic advantage of those who use it relative to those who don't, that when they hit competitive situations, that's why there are not many hunter-gatherers left and why the whole society went to agricultural. So it's not just that the tech, so the tech code, the tech requires people using it, which changes the patterns of human behavior. Changing the patterns of human behavior automatically changes the patterns of perception and human psyche, metaphors, cultures, et cetera."
    },
    {
      "end_time": 11516.357,
      "index": 449,
      "start_time": 11488.882,
      "text": " and the externalities that it creates and the benefits that it causes become implicit to the value system because the value system can't be totally incommensurate with the power system, right? And so the dominant narrative ends up becoming support for, one could argue, apologism for the dominant power system. And because we can't feel totally bad about how we meet our needs, so we have to have a value system that deals with the problems of how we do so."
    },
    {
      "end_time": 11541.664,
      "index": 450,
      "start_time": 11516.732,
      "text": " But then it's also that the tech that does this becomes obligate because when anyone is using it, everyone else has to, or they kind of lose by default. So when you recognize that tech affects the technology, when utilized, affects patterns of human behavior, humans now do the thing they do with the tech. So people do this thing and they didn't used to do this thing on the cell phone or whatever."
    },
    {
      "end_time": 11560.452,
      "index": 451,
      "start_time": 11542.654,
      "text": " To get the benefits of the tech, you have a totally different pattern of human behavior. As a result, you have different nature of mind. You have different value systems that emerge and it becomes obligate or some version of a compensatory tech becomes obligate because the Amish are not really shaping the world. They're no longer engaged in the great power competition."
    },
    {
      "end_time": 11588.968,
      "index": 452,
      "start_time": 11561.101,
      "text": " I have a bone to pick there. I watched a few months ago and I don't know anything about the Amish or didn't know anything about the Amish and I'm just someone who grew up in this city and so I dismissed them as Luddites, like we've used that term several times and they're backward, they don't know what they're talking about. And then I watched a video, the Amish aren't idiots, they're not asinine. There's a reason why they do what they do and they either explicitly or intuitively understand that the technology changes the social dynamics in the way that they view the world and"
    },
    {
      "end_time": 11616.852,
      "index": 453,
      "start_time": 11589.565,
      "text": " Totally and has ethical considerations. So but that influenced me that influenced perhaps millions of people because it's a video. I think it has a few million hits even if they're local just them saying, you know what? I don't care. I'm going to continue to act right. It can still influence outward anyway, and we're talking about it now. Maybe this will influence people and hopefully to something positive and I hopefully to myself something positive. Yeah. Okay, so it's not that"
    },
    {
      "end_time": 11644.241,
      "index": 454,
      "start_time": 11617.21,
      "text": " You come to this a few times, which is even if you have a memeplex that doesn't become part of the dominant system, can it infect or influence the memeplex in a way that steers it? Yes. But one does not want to be naive about how much influence that's going to have. They want to be thoughtful about exactly how that'll work and what kinds of influences. As we mentioned, not all of Buddhism got picked up everywhere, right? Like the parts that had to do with"
    },
    {
      "end_time": 11673.797,
      "index": 455,
      "start_time": 11644.889,
      "text": " Why people should take vows of poverty and live on very little. That didn't really get picked up. The parts on how to reduce stress got picked up. The parts on what a healthy motive is didn't get picked up as much as the parts on how to empower your motive through a more functional mind. So it's important to get that the memes might live in a complex, in a context when they influence parts of them are going to interact with another memeplex and the technoplex and everything else. And so"
    },
    {
      "end_time": 11699.206,
      "index": 456,
      "start_time": 11674.48,
      "text": " You are right to say that it's not that they have no influence, but obviously the omission, not speaking to that they're dumb and backwards, but that in their don't want to engage tech for these reasons argument, they don't have a significant say in whether we engage a particular nuclear war or not. Or they were not the ones that overfished the ocean caused species extinction, but they also couldn't stop it."
    },
    {
      "end_time": 11713.831,
      "index": 457,
      "start_time": 11700.128,
      "text": " They are not the ones that are creating synthetic biology that can make totally new species. And this is why I say there is a naive techno-optimism that focuses on the upsides and doesn't focus on all the nth order effects and downsides."
    },
    {
      "end_time": 11731.237,
      "index": 458,
      "start_time": 11714.394,
      "text": " And as we were just mentioning, the externalities of tech are not just physical, right? You do this mining to get the thing you want, but there's a lot of mining pollution or the herbicide does make farming easier in this way, but it harms the environment and human health in this other way. That's physical externalities, but you also get these psychosocial externalities."
    },
    {
      "end_time": 11749.462,
      "index": 459,
      "start_time": 11731.237,
      "text": " Use facebook for this purpose and it ends up eroding democracy and doubling down on bias and increasing addiction and body dysmorphia and things like that right so the tech effects not it doesn't it has effects that are not the ones you intended some of which might be positive you can have a positive externality but it might have a lot of negative externalities and those"
    },
    {
      "end_time": 11775.06,
      "index": 460,
      "start_time": 11749.974,
      "text": " Negative externalities can cascade second, third, fourth order effects. So there's a naive techno-optimism that doesn't pay enough attention to that. There's a naive techno-pessimism that says, well, I'm aware of those negative externalities. I don't want those for our people. We think we can isolate our people from everybody else and say, we're going to not do that. But we're going to have decreased influence over what everyone who is doing that has."
    },
    {
      "end_time": 11802.329,
      "index": 461,
      "start_time": 11775.998,
      "text": " right, which is what then some of the AI labs argue is there's an arms race, we can't stop the arms race on it. And so only being at the front of the arms race can we steer it. I would argue that that is a naive version of that particular thing, but nonetheless. So if we want to, you know, in one way of reading one of the problematic lessons of the elves in Tolkien,"
    },
    {
      "end_time": 11821.408,
      "index": 462,
      "start_time": 11803.166,
      "text": " Is and I'm just making this as like a toy model is in some ways they figured out how to have a nicer life than the humans and dwarves and whatever else they were able to do radical life extension and figure out great GDP per capita where the poorest people were doing well."
    },
    {
      "end_time": 11835.111,
      "index": 463,
      "start_time": 11821.954,
      "text": " And they were so kind of, but they became insular because they were so disenchanted with the world of men and elves and whatever that they're like, fuck it, we're just going to kind of isolate and do our own thing our own way. But it ends up being that you're still all sharing Middle Earth."
    },
    {
      "end_time": 11864.241,
      "index": 464,
      "start_time": 11835.52,
      "text": " and the problem somewhere else can cascade into catastrophic problems that end up messing up your world too. So you can't be isolationist forever in an interconnected world. So they actually had to, they were kind of obligated if we rewrote the story to take whatever they had learned and try to help everybody else have it or have enough of it that you didn't get work dominance and stuff like that. So basically arguing that a isolationist, we see a problem, we're going to avoid that for ourselves."
    },
    {
      "end_time": 11895.043,
      "index": 465,
      "start_time": 11865.418,
      "text": " doesn't work with planetary problems. And so I'm not interested in the naive techno negative versions that say because we see a problem with tech we're not going to do it, but we're also going to kind of load a seat in the process and not engage with the fact that we actually care about what happens for the world overall and we have to engage with how the world as a whole is doing that thing. Makes sense what I mean by the naive techno pessimism. And it is that"
    },
    {
      "end_time": 11902.227,
      "index": 466,
      "start_time": 11895.828,
      "text": " You do not get to do effective isolationism on an interconnected planet that is hitting planetary boundaries with exponential tech."
    },
    {
      "end_time": 11932.534,
      "index": 467,
      "start_time": 11903.131,
      "text": " Yeah, I guess what I'm trying to express is that even the Amish, with what they're doing, it's not as simple as the memeplex that's exported by the Amish is the Amish memeplex. There's something else that even influenced them, even yourself. You may be in a position that saves Earth, at least for now, from a hugely catastrophic event. Same with Jodkowski and same with some others. But what influenced you? There's some good in you, hopefully, that was influenced by something else, by something else that's good, which also influenced the Amish. Each person is corrupt in some way."
    },
    {
      "end_time": 11961.578,
      "index": 468,
      "start_time": 11932.534,
      "text": " So I'm saying that there's something that's like the unity of virtues that influences us and that as long as we go back and we think or constantly we're assessing ourselves and saying is what I'm doing good then these other meme plexes that are being thrown to us and yes it's in a different context somehow it can orient and pick up the good. We're completely on the same page which is that that happens does not always happen and that that is a important thing to have happen."
    },
    {
      "end_time": 11988.183,
      "index": 469,
      "start_time": 11962.449,
      "text": " But if that happened adequately or at the, yeah, I will say adequately, then we wouldn't have extinct all the species that we have. We would not have turned as many old growth forests into deserts. We would not have had as many genocides and unnecessary wars and et cetera. So seeing the failure and where either someone's definition of good is too narrow"
    },
    {
      "end_time": 12014.07,
      "index": 470,
      "start_time": 11989.07,
      "text": " get our God to win and fuck everybody else or grow GDP and that'll take care of everything. We can well-intendedly pursue a definition of good that's too narrow and externalize harm unintentionally. We can pursue a definition of good that we really believe in, that other people don't believe in, and our answer is to win over them, but it creates an arms race where now we're in competition over the thing. Or where there are people who are really not pursuing the good of all, even"
    },
    {
      "end_time": 12038.951,
      "index": 471,
      "start_time": 12014.599,
      "text": " They're not even trying to, right? Whether it's sociopathy from a head injury or genes or trauma or whatever it is, they are pursuing a different thing, right? But they're good at acquiring power. And this is actually a very important thing is that the psychologies that want power and are good at getting it,"
    },
    {
      "end_time": 12069.189,
      "index": 472,
      "start_time": 12041.032,
      "text": " And the psychologies that would be the best stewards of power for the well-being of all are not the same set of psychological attributes. They're pretty close to inversely correlated. So those types of things have to be calculated in this. So what you're saying right now, which is great you're saying it, is that there is some odd impulse that is not only an is impulse, right? That you're calling a universal virtue or good or something."
    },
    {
      "end_time": 12099.206,
      "index": 473,
      "start_time": 12069.838,
      "text": " and that you're saying that some people feel very called by that and that that's important. I agree completely. Now, why is that historically and currently not a strong enough binding is the important question. Why has that not been a strong enough binding for all the species that are extinct and all the animals and factory farms and all the disruption, et cetera?"
    },
    {
      "end_time": 12127.278,
      "index": 474,
      "start_time": 12099.633,
      "text": " And then what would it take for it to become a strong enough binding or the nature of the question here, right? That's actually at the heart of the metacrisis question is to have like, what is a system of ought that is actually commensurable with the system of is and what is a way of having that actually sufficiently influenced behavior such that the catastrophic behaviors don't occur."
    },
    {
      "end_time": 12151.203,
      "index": 475,
      "start_time": 12128.404,
      "text": " and that the nature of the influence is not so top-down that it becomes dystopic and that's something like is there either a so one way of thinking about this is I've mentioned the term a couple times superstructure social structure infrastructure that comes from Marvin Harris's work on cultural materialism basically saying every civilization"
    },
    {
      "end_time": 12180.674,
      "index": 476,
      "start_time": 12152.022,
      "text": " You can think of it in those ways. What is its kind of memeplex? What is its social coordination strategies? And what is the physical tooling set upon? It depends. And different social theorists will say which of these they think is most fundamental. That the value systems are ultimately what steer behavior and determine the types of tech we build and the types of societies. Religious thinkers think there, right? Enlightenment thinkers think there. The social system actually, whatever you incentivize financially is what's going to win."
    },
    {
      "end_time": 12208.524,
      "index": 477,
      "start_time": 12181.084,
      "text": " because whether it's good or not, the people who do that get the money, can incentivize more people, create the law, etc. So ultimately, the most powerful thing is the social coordination systems. And then the other schools of thought say, no, actually, the thing that changes in time the most is the tech, and the tech influences the patterns of human behavior, values, everything else. And that's actually what Marvin Harris roughly was saying, was that"
    },
    {
      "end_time": 12237.807,
      "index": 478,
      "start_time": 12208.933,
      "text": " the change in the techplex ends up being the most influential thing to the change, because it does affect worldviews and it does affect social systems. In the way we already mentioned that the change of the techplex of the printing press affected both worldviews and social systems, so did the plow, so did the internet, so it was about to be AI. I would argue that these three are interacting with each other in complex ways. They all inter-inform each other, and what we have to think about is"
    },
    {
      "end_time": 12268.302,
      "index": 479,
      "start_time": 12239.514,
      "text": " What changes in all three of them simultaneously factoring all the feedback loops produce a virtuous cycle that orients in a direction that isn't catastrophes or dystopias is the right way of thinking about it. And ultimately, the direction actually has to be the superstructure informing the social structure, informing or guide, bind, direct the infrastructure. Sorry, can you repeat that once more?"
    },
    {
      "end_time": 12288.592,
      "index": 480,
      "start_time": 12269.206,
      "text": " Yeah, the right now, especially post industrial revolution, physical technology infrastructure had way faster feedback loops on it than the others did. Right. And because of that, it started breaking the previous like industrial era tech."
    },
    {
      "end_time": 12318.66,
      "index": 481,
      "start_time": 12290.094,
      "text": " at massive scales with those externalities, whatever can't be managed by agrarian era or hunter gatherer era worldviews and political systems, right? So we ended up getting a whole new set of political systems, both nation democratic, liberal democracies and capitalism and social communism emerging as writing the industrial revolution and what should be the political economy that governs that thing, right?"
    },
    {
      "end_time": 12345.265,
      "index": 482,
      "start_time": 12319.309,
      "text": " but the feedback loops from tech and specifically whether it's a nation state caught in multipolar traps that's building the tech in a central government communist type place or a market building it but that has perverse incentives built into what its incentive structure is that has influence on our social structures and our cultures superstructures that we could say that"
    },
    {
      "end_time": 12372.108,
      "index": 483,
      "start_time": 12347.739,
      "text": " way of thinking about the driver of the Meta Crisis. Now, the other direction, if we are to say, no, these examples of the tech won't be built, or we're not going to use the tech in these ways, right? We're not going, yes, you can use a tech that extracts some parts of rocks from other parts of rocks that gives us metal we want, but also gives a lot of waste. No, you can't put all that waste in the waterway, right?"
    },
    {
      "end_time": 12402.841,
      "index": 484,
      "start_time": 12373.097,
      "text": " or you can't put that pollution there or you can't cut all the trees down in that area because we're calling it a national park, right? That law or regulation that is not just the tech thing, that's the social layer. So that layer has to bind the tech and guide and direct it, say these applications and not these ones. Yeah. Right? Yeah. But if the social system is not an instantiation, if the social structure is not an instantiation of the superstructure, i.e. it's not an instantiation of the will of the people, then it will be oppressive."
    },
    {
      "end_time": 12430.503,
      "index": 485,
      "start_time": 12403.882,
      "text": " which is why the idea of democracy emerged out of the idea of the Enlightenment, which was this was a kind of governance that only worked for a comprehensively educated, and you read all the founding documents, that the comprehensive education had to be is and ought. It said you must have a moral education as well as a scientific education, and you must be schooled in the science of governance."
    },
    {
      "end_time": 12458.916,
      "index": 486,
      "start_time": 12431.118,
      "text": " and only a people like that, going back to what we said earlier, could check the government, could both know the jurisprudence to instantiate what is good law, could engage in dialectics to listen to other people's point of view to come up with democratic answers. So it was the idea that there was a kind of superstructure possibility, which was some kind of enlightenment or era values that could make a type of social structure"
    },
    {
      "end_time": 12489.053,
      "index": 487,
      "start_time": 12459.326,
      "text": " that could both utilize the tech and guide it, but also bind its destructive applications. So when you're saying, isn't there some superstructure, isn't there some sense of good that will make us bind our capacities? I would argue that if the sense of good doesn't emerge from the collective understanding and will of the people, but is instantiated in government because we, the technocrats know the right answer or we, the enlightened know the right answer, that will be oppressive and people are right to be concerned by it."
    },
    {
      "end_time": 12510.828,
      "index": 488,
      "start_time": 12489.77,
      "text": " But if the collective understanding is, I want what I want, I don't care what the effects are, fuck those guys over there, or I'm not paying attention to externalities or whatever, then the collective will of the people is too dumb to govern and misguided to govern exponential tech and will self-terminate. So you cannot have a"
    },
    {
      "end_time": 12541.032,
      "index": 489,
      "start_time": 12511.817,
      "text": " a uneducated, unevolved set of values in a libertarian way, guide exponential tech well. It has to be more considerate. It has to think through end-order effects. But you also can't have a government that says, we are the enlightened ones and we figured it out and we're going to impose it on everyone else without it being oppressive and tyrannical, which means nothing less than a kind of cultural enlightenment is required long-term. So the collective will of the people, now this gets back to the alignment topic,"
    },
    {
      "end_time": 12560.418,
      "index": 490,
      "start_time": 12541.374,
      "text": " Is the will of the people aligned with itself actually, right? Is what I want factoring the effects of what I want, the end-order effects, which means how other people will respond to that and all that comes from it, is what I want actually even aligned with a viable future, right?"
    },
    {
      "end_time": 12591.476,
      "index": 491,
      "start_time": 12561.561,
      "text": " And so that is, when we're talking about getting alignment right, alignment with my intention, where my intention is that my country wins at all costs, where then China's like, well, fuck, we're going to do the same thing, or Russia, et cetera, so you get arms races. That intent, or my intent is I want more stuff and keep up with the Joneses and I'm not paying attention to planetary boundaries. Those intents are not aligned with their own fulfillment because the world self-determinates for too long in that process."
    },
    {
      "end_time": 12613.626,
      "index": 492,
      "start_time": 12592.995,
      "text": " And so with the power of exponential tech and the cumulative effects of industrial tech, we do have to actually get ought that can bind the power of that is, and it can't be imposed. It does have to be emergent, which does mean something like that sense that you're saying has to become"
    },
    {
      "end_time": 12639.292,
      "index": 493,
      "start_time": 12614.172,
      "text": " very universal and nurtured, right? And then has to also instantiate itself in reformation of systems of governance. I love what you said. Let's see if I can recapitulate it. There's tech at the bottom, there's a social structure here, and then there's culture. Okay, so these are people up here. There's people in all three. There's people's values up here, values. So values are up here. And then there's the social structure over here. And then there's tech over here."
    },
    {
      "end_time": 12658.166,
      "index": 494,
      "start_time": 12639.565,
      "text": " Okay, so currently the tech influences the way our society is structured, which also influences our values and a part of the meta crisis is saying that that's upside down, but it shouldn't just be whatever values that just get imposed onto the social structure onto the values have to somehow come from someplace else and then the mystics have their other they have to be coherent with reality."
    },
    {
      "end_time": 12688.541,
      "index": 495,
      "start_time": 12659.599,
      "text": " spiritual people may call this something akin to God and the enlightenment people may say I don't know maybe there's some evolutionary will that comes out and if we just close our eyes and hope for the best somehow that emerges whatever it's called it's not entirely us it's not entirely our conscious selves the conscious self would be the more humanistic the enlightenment way of thinking about it is that we can impose our own values that's Nietzsche had something similar to this so I like this I'm in agreement with it I think we've just been using different terminology"
    },
    {
      "end_time": 12717.995,
      "index": 496,
      "start_time": 12689.189,
      "text": " You and I both know that an evolution of cultural worldview and values adequate to steward the power of exponential technology in non-catastrophic or non-dystopic ways is happening in some areas, but a regress is also happening in some areas."
    },
    {
      "end_time": 12742.363,
      "index": 497,
      "start_time": 12719.377,
      "text": " There is increasing left-right polarization. I thought you were going to say there's a regress happening in demand. For instance, we generally think it has to be Malthusian and the more that we use, the more the demand for that increases. That's obviously removing some of the more scarce objects like art and gold, which their value comes from scarcity."
    },
    {
      "end_time": 12767.466,
      "index": 498,
      "start_time": 12742.363,
      "text": " But there is like the largest health trend right now is fasting. It's like we've gotten so much food that we're like, let's just not have any food. And then there's also recycling, like just imagine that we think about recycling at all. So there is some recognition that, hey, look, we're consuming too much, let's cut back. And so it's not purely just an exponential function. It is we take into account the rate of production. Well, so what we can see is that"
    },
    {
      "end_time": 12788.302,
      "index": 499,
      "start_time": 12768.166,
      "text": " The percentage of the total, let's go ahead and say US, but we could look at UAE or Nigeria or whatever, various places, the percent of the US population that is regularly doing fasting is still a relatively small percentage."
    },
    {
      "end_time": 12818.865,
      "index": 500,
      "start_time": 12789.599,
      "text": " And in the same way that, like it is true that when there's a market race to the bottom that we saw in food, which Hostess and McDonald's, you know, et cetera, kind of want, which is how do we make the food more and more combinations of salt, fat and sugar and texture and palatability that maximize kind of stickiness and addiction, which of course, if I have a fiduciary responsibility to shareholder maximization and the key to that is to optimize the lifetime value of a customer, addiction is awesome, right?"
    },
    {
      "end_time": 12846.049,
      "index": 501,
      "start_time": 12819.309,
      "text": " It's actually not awesome. It's legally obligate because of maximized shareholder returns. So that created a race to the bottom where rather than starvation being the leading cause of death, obesity was the leading cause of health-related death in the West. Well, that bottom of the race to the bottom also creates a race to the top for a different niche. So then Whole Foods becomes the fastest growing supermarket and biohacking and et cetera."
    },
    {
      "end_time": 12867.278,
      "index": 502,
      "start_time": 12846.749,
      "text": " So that's true. Did that affect the overall population demographics regarding obesity very significantly? Not really. Did it stop the race to the bottom? No, it just added another little niche race, which also then separated, which created more class system separation. So it's not that those effects don't happen."
    },
    {
      "end_time": 12894.121,
      "index": 503,
      "start_time": 12867.875,
      "text": " Are they happening at the scale and speed necessary when we look at catastrophic risks? So, of course, I can also pay more for a post-consumer recycled thing and there is more recycling happening. But there's also more net extraction of raw resources and more net waste and pollution per year than there was the previous year, because the whole system is growing exponentially. So even if the recycling is growing, it's actually not growing enough to even keep up with demand, right?"
    },
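The claim just made, that raw extraction keeps rising even while recycling grows, can be checked with a toy calculation. This is a minimal sketch in Python; the growth rates and starting values are illustrative assumptions, not real data from the conversation:

```python
# Toy arithmetic: even if the recycled share of material grows every year,
# raw extraction can still rise when total throughput grows exponentially.

total = 100.0            # total material demand in year 0 (arbitrary units)
recycled_share = 0.10    # fraction met by recycling in year 0 (assumption)
growth = 0.03            # assumed 3% annual growth of total demand
share_gain = 0.005       # recycled share assumed to rise half a point per year

for year in range(11):
    raw = total * (1 - recycled_share)   # virgin extraction this year
    print(f"year {year:2d}: total {total:6.1f}  recycled share {recycled_share:.1%}  raw extraction {raw:6.1f}")
    total *= (1 + growth)
    recycled_share = min(1.0, recycled_share + share_gain)

# Raw extraction climbs every year despite the growing recycled share,
# because recycling isn't growing fast enough to offset exponential demand.
```

With these assumed numbers, the recycled share rises steadily yet virgin extraction still increases each year, which is the pattern being described.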
    {
      "end_time": 12913.814,
      "index": 504,
      "start_time": 12894.667,
      "text": " So what I'm saying is now let's come bring that back to the memetic space, which is where I was. There are both evolution of values where people are wanting to think through catastrophic risk, existential risk, planetary well-being of everybody long term. So that's good. But there is also"
    },
    {
      "end_time": 12936.63,
      "index": 505,
      "start_time": 12914.377,
      "text": " Cultural enlightenment is not impossible, but it's also not a given."
    },
    {
      "end_time": 12966.254,
      "index": 506,
      "start_time": 12937.927,
      "text": " The kind of mimetic shift, and this is obviously, I think, a big part of why you do the public education mimetic work that you do, is because of having a sensibility about, is it possible to support the development and evolution of worldviews and people in ways that can propagate and create good?"
    },
    {
      "end_time": 12995.742,
      "index": 507,
      "start_time": 12966.732,
      "text": " Well, you're saying it much more benevolently. Honestly, it's just selfish that I'm just super, super, super curious about all of these topics. And by luck, some other people care to listen and follow along. And I just get to elucidate myself. It's so fun. It bangs on every cylinder. And some other people seem to like it. I hope that what I'm doing is something positive. I hope that it's not producing more harm. I'm not even considering this is sent over the internet is using up energy and Okay, what you just"
    },
    {
      "end_time": 13025.896,
      "index": 508,
      "start_time": 12996.408,
      "text": " What you just said takes somewhere that I wanted to go that's very interesting. So we're talking about alignment and is a particular alignment, is a particular say human intention aligned with the collective well-being of everybody or even their own long-term future. At the base of the alignment problems is that we are not aligned with the other parts of our own self, right? So from a kind of Jungian parts conflict point of view,"
    },
    {
      "end_time": 13050.725,
      "index": 509,
      "start_time": 13026.357,
      "text": " Motivation is complex because there's different parts of us that have different motivations. One way of thinking about psychological health, the parts integration view, is the degree to which those different parts are in good communication with each other and see synergistic strategies to meet their needs that don't require that part of self's motivation harming another part of self, but they're actually doing synergistic stuff. The whole of self pulls in the same direction."
    },
    {
      "end_time": 13076.578,
      "index": 510,
      "start_time": 13051.357,
      "text": " If you think of like the parts of self as sled dogs, they can be pulling in opposite directions. You get nowhere. They're all choked themselves. So we can see psychological health and ill health is how conflicted are the parts of ourself versus how synergistic are they? Synergistic does not mean homogenous. Doesn't mean we just have one motive. It means that the various motives find synergistic alignment rather than. Yeah, like our bodies are synergistic. Our heart is not the same as the liver. Exactly. Now in your heart,"
    },
    {
      "end_time": 13105.23,
      "index": 511,
      "start_time": 13077.671,
      "text": " is not going to optimize itself, it delivers long-term harm. Even though on its own, you could say it has a different incentive, it is part of a interconnected system where that actually doesn't really make sense. But a cancer cell will optimize itself or its both itself, how much sugar it consumes and its reproduction cycles at the expense of things around it. And in doing so, it actually is on a self-terminating curve because it ends up killing the host and then killing itself."
    },
    {
      "end_time": 13131.049,
      "index": 512,
      "start_time": 13105.947,
      "text": " And so the cancer that does not want to bind its consumption and regulation aligned with the pattern of the whole ends up actually doing better in the short term, meaning consuming more and producing more. And then there's a maximum number of cancer cells right before the body dies and they're all dead. So there is a, if something is inextricably interconnected with the rest of reality, like the heart and the liver or the various cells."
    },
    {
      "end_time": 13158.2,
      "index": 513,
      "start_time": 13131.391,
      "text": " But it forgets that or doesn't understand that and optimizes itself at the expense of the other things. It can be on what looks like a short-term winning path, but that self-terminates. It would be an evolutionary cul-de-sac. And I would argue that the collective action failures of humanity as a whole are pursuing an evolutionary cul-de-sac. And so one way of thinking about this, when we say we aren't even that aligned with ourselves, we think of motive. It's"
    },
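That self-terminating curve can be sketched as a toy dynamical model. The parameters below are made-up assumptions purely for illustration, not anything quantitative from the conversation:

```python
# Toy dynamical sketch: a sub-part that maximizes its own growth at the
# expense of the whole "wins" in the short run and then self-terminates
# when the host it depends on collapses.

cancer = 1.0        # size of the self-optimizing part
host = 100.0        # health of the whole it depends on
growth_rate = 0.30  # assumed per-step growth of the part
damage_rate = 0.05  # assumed damage to the host per unit of the part

step = 0
while host > 0:
    step += 1
    cancer *= (1 + growth_rate)      # the part keeps consuming and reproducing
    host -= damage_rate * cancer     # the whole degrades in proportion
    print(f"step {step:2d}: part {cancer:8.1f}  whole {max(host, 0):6.1f}")

print(f"Whole collapses at step {step}; the part that 'won' dies with it.")
```

The part's curve looks like winning right up until the whole collapses, which is the evolutionary cul-de-sac being described.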
    {
      "end_time": 13181.186,
      "index": 514,
      "start_time": 13158.831,
      "text": " We like to think of leaders. What is Putin doing or what is Biden or she are doing in a particular thing? What is their motive? Or what is the founder of an AI lab motive? Motive will always be that each of the parts of the self has a different motive, right? So typically like some unconscious part of me still wants to"
    },
    {
      "end_time": 13207.944,
      "index": 515,
      "start_time": 13181.527,
      "text": " get the amount of parental approval that I didn't get and then projecting that on the world through some idea of success or to prove that it's enough or whatever. And some part of me is just directly motivated by money. Some evolutionary part is motivated by maximizing mate selection opportunities. Some part of me genuinely wants to do good. Some part of me wants intellectual congruence, right? And so it's"
    },
    {
      "end_time": 13237.278,
      "index": 516,
      "start_time": 13210.811,
      "text": " there can absolutely be a burn it all down part, right? And this is why shadow work's important, right? Which is look at and talk to all of these parts and see how to get them to move forward together. This is basically governance at the level of this health. So I don't know if you ever watched, and this might be because we're long, even though there's so much left to discuss, a decent place for us to wrap up on alignment."
    },
    {
      "end_time": 13261.869,
      "index": 517,
      "start_time": 13240.538,
      "text": " I would say a number of the theorists who you have referenced on the show would be good references for what I would consider the deepest drivers of the Meta Crisis and also the alignment considerations. If you think of like Ian McGillcrest's work with the Master and the Emissary. The right hemisphere is the Master and the left hemisphere is the Master's Emissary."
    },
    {
      "end_time": 13292.329,
      "index": 518,
      "start_time": 13264.445,
      "text": " In his 2009 opus, The Master and His Emissary, Ian McGilchrist discusses the distinct functions of the brain's left and right hemispheres. Generally, there's plenty of pop-science woo around this concept, but then you can dispel by going even further to find the correctness about it. The left hemisphere focus on processes such as formal logic, symbol manipulation, and linear analysis. While the right hemisphere is concerned with context awareness,"
    },
    {
      "end_time": 13301.237,
      "index": 519,
      "start_time": 13292.329,
      "text": " the appreciation of unique instances and topological understanding. Hey, maybe there's some stone duality between them, but I haven't thought much about this."
    },
    {
      "end_time": 13329.735,
      "index": 520,
      "start_time": 13305.879,
      "text": " John Vervecky's work, by the way, explores the primacy of cognitive processes like relevance realization, aiming to bridge the gap between analytic and intuitive thinking. Both McGilchrist and Vervecky emphasize the importance of integrating the strengths of each hemisphere or modes of cognition when attempting to tackle intricate problems such as the risks of ever more powerful AIs."
    },
    {
      "end_time": 13346.766,
      "index": 521,
      "start_time": 13329.735,
      "text": " The argument is that AI systems primarily operate using left hemisphere capabilities, like pattern recognition, logical reasoning, and general optimization problems. However, they fail to adequately consider the subtleties of human values and ethical implications, which thus leads"
    },
    {
      "end_time": 13363.865,
      "index": 522,
      "start_time": 13346.766,
      "text": " To mitigate AI risks and prevent an arms race, incorporating insights from both hemispheres and embracing context awareness is crucial. This requires interdisciplinary collaboration between mathematicians, computer scientists, physicists, philosophers, neuroscientists"
    },
    {
      "end_time": 13388.217,
      "index": 523,
      "start_time": 13363.865,
      "text": " And by the way, it's something that we're attempting in our humble manner on the Theories of Everything channel. By exploring concepts in complex systems theory and how it applies to our current unprecedented situation, we at least hope to understand the interconnectedness of the factors that play in AI development. For instance, addressing AI risks can involve analyzing multi-agent systems, considering network effects and potential feedback loops."
    },
    {
      "end_time": 13405.862,
      "index": 524,
      "start_time": 13388.217,
      "text": " We do not think ourselves into a new way of living. We live ourselves into a new way of thinking."
    },
    {
      "end_time": 13436.578,
      "index": 525,
      "start_time": 13408.046,
      "text": " You could say, and I talked to Ian about this, and I said, so would you say that the metacrisis, as I formulated, is the result of getting the master and the emissary wrong, which is kind of getting the principle and agent between those two different aspects of human wrong? And he goes, yes, exactly. That's kind of the whole key. That there is a function that he's calling the master that perceives the kind of unmediated field of interconnected wholeness, multimodally perceives and experiences it."
    },
    {
      "end_time": 13466.271,
      "index": 526,
      "start_time": 13437.159,
      "text": " And then there is this other set of networks, capacities or dispositions that perceive parts relative to parts, name them, do symbol grounding and orient more in the domain of symbol and can do relevance realization. What part is relevant to a particular goal I have and salience realization, what things should be relevant to some goal and I should be tracking and information compression, which are largely things that we think of as like human intelligence, which of course AI is taking that"
    },
    {
      "end_time": 13490.094,
      "index": 527,
      "start_time": 13466.715,
      "text": " emissary part and turning it into a external tool rather than that's the thing that makes tools in us now take that thing and make that as a tool but also unbound by the master function way he would call that it's a very interesting way of thinking about AI alignment and whatever and that the master function that is perceiving the"
    },
    {
      "end_time": 13516.152,
      "index": 528,
      "start_time": 13490.486,
      "text": " unmediated, ground directly not mediated by simple field of interconnected wholeness, that the other function that can do relevance realization, parts realization, centralization, info compression, basically utility function stuff, that that has to be in service of the field of interconnected wholeness. If not, we'll upregulate some parts at the expense of other ones and the cumulative effect of that on an exponential curve will eventually bring us to the meta crisis and self-terminate."
    },
    {
      "end_time": 13524.241,
      "index": 529,
      "start_time": 13517.005,
      "text": " I would say what McGillcrest was saying was expanding on what Bohn said about the implicate order and wholeness."
    },
    {
      "end_time": 13547.381,
      "index": 530,
      "start_time": 13524.548,
      "text": " Bohm's theory of wholeness and the implicate order states that there is something like life and mind enfolded in everything. A tremendous number of ways in which one can see enfoldment in the mind. One can see that thoughts enfold, feelings enfold thoughts, because a feeling will give rise to a thought. Thoughts enfold feelings. The thought that the snake is dangerous will enfold the feeling of danger."
    },
    {
      "end_time": 13575.265,
      "index": 531,
      "start_time": 13547.875,
      "text": " which will then unfold when you see a snake, right? Bohm was looking at the orientation of a mind that mostly thinks in words, a Western mind, you know, in particular, to break reality into parts and make sure that our word, the symbol that would correspond with the ground there, corresponded with the things that it was supposed to and not the other things, so try to draw rigorous boundaries to, you know, divide everything up, led to us"
    },
    {
      "end_time": 13604.309,
      "index": 532,
      "start_time": 13575.555,
      "text": " And when Bohm and Krishnamurti did their dialogues, which I don't know if you've watched those, they're some of my favorite dialogues in history,"
    },
    {
      "end_time": 13624.701,
      "index": 533,
      "start_time": 13606.425,
      "text": " Bone was basically answering, what is the cause of all the problems? What's the cause of the Meta Crisis? He didn't call it that at the time. And he basically said a kind of fragmented or fractured consciousness that sees everything as parts where you can upregulate some parts relative to other ones without thinking about the effect of that on the whole, right?"
    },
    {
      "end_time": 13655.23,
      "index": 534,
      "start_time": 13625.828,
      "text": " And that obviously comes from Einstein being one of his teachers, where Einstein said it's an optical delusion of consciousness to believe there are separate things. There is in reality one thing we call universe. Regarding the theme of consciousness, it's prudent to give an explication here, as often at least I found that mysteries arise because we're calling different phenomenon by the same term. And this applies to consciousness, which doesn't refer to just one aspect, but rather several that can be delineated. To further differentiate, I spoke to Professor Greg Henrichs on this very topic."
    },
    {
      "end_time": 13674.855,
      "index": 535,
      "start_time": 13655.64,
      "text": " I'm attempting to delineate a few concepts, that is, Adjectival Consciousness, Adverbial Consciousness, and Phenomenal Consciousness, which I believe is the same as Pea Consciousness, but if that's not the same, then that's four different concepts. So what are they? Can you give the audience and myself an explanation as to when are some satisfied but not others so that we can delineate?"
    },
    {
      "end_time": 13704.428,
      "index": 536,
      "start_time": 13675.316,
      "text": " Totally. Yep. Yeah. And actually, adjectival adverbial are going to, when we use P, when John and I certainly use P consciousness, phenomenological consciousness is reflecting on adjectival adverbial consciousness. And John refers to John Vervecky. John Vervecky. Yeah. Because we then did a whole series, Untangling the World Not, to make sure that our systems were in line, both in terms of our definitional systems and our causal explanatory framework. So we did a"
    },
    {
      "end_time": 13733.097,
      "index": 537,
      "start_time": 13704.753,
      "text": " 10 part series on just the two of us on untangling the world, not of consciousness. And then we did one on the self. Then we did one on well being. And we also did one on development and transformation with Zach Stein. So all of we, our systems, I think, are now completely synced up, at least in relation to the language structures that we have. And so I can tell you what we would mean by adjectival adverbial consciousness, which then sort of is what most people mean by phenomenological consciousness."
    },
    {
      "end_time": 13756.032,
      "index": 538,
      "start_time": 13733.575,
      "text": " Okay, so if I understand correctly, one has to do with degrees and then another has to do with a here-ness and a now-ness. Yeah, exactly. So actually there's, I like to, so I would encourage us to say there's, let's define conscious, there are three different kinds of definitions of consciousness, okay, that I think the first definition of consciousness is functional awareness and responsivity."
    },
    {
      "end_time": 13767.21,
      "index": 539,
      "start_time": 13756.374,
      "text": " Okay, this is something that shows awareness and can respond with control and at the broadest definition, then even things like bacteria can show a kind of functional awareness and responsivity."
    },
    {
      "end_time": 13796.408,
      "index": 540,
      "start_time": 13767.551,
      "text": " That's the behavioral responsiveness. And when you say, hey, is that guy conscious? What you mean is he's not responding at all. He's not showing any functional awareness and responsibility. He's either knocked out or blacked out or asleep. So when you say consciousness in that way, that's functional awareness and responsivity, which you can see from the outside and you see in the way in which the agents operating on the arena because they're showing functional awareness and responsivity."
    },
    {
      "end_time": 13818.763,
      "index": 541,
      "start_time": 13796.92,
      "text": " The second meaning of consciousness is subjective conscious experience of being in the world. The first person experience of being and this is where the hard problem of consciousness comes online and that's what most people mean by P consciousness or phenomenological consciousness. It's a subjective experience of being which is only available from the inside out."
    },
    {
      "end_time": 13845.213,
      "index": 542,
      "start_time": 13819.838,
      "text": " And then the final one is a self-conscious access, so that now I can know that I have had an experience, retrieve it, and then in its highest form report on it. So self-consciousness then is the capacity to recursively access one's phenomenological thing and an explicit self-consciousness, which is what humans have and other animals generally don't,"
    },
    {
      "end_time": 13869.957,
      "index": 543,
      "start_time": 13845.213,
      "text": " Is this capacity say, Kurt, I am thinking about your question. I'm experiencing your face and this is my narrative in relation. So that's explicit self-conscious awareness. Just a moment. You said access. Is that the same as access consciousness or is that's different? No, that's net blocks access consciousness, which basically there's the, do you have the experience? And then is there a memory access loop that stores it and then can use it?"
    },
    {
      "end_time": 13896.323,
      "index": 544,
      "start_time": 13870.265,
      "text": " So if I can gain access to it, that's a sort of access consciousness as relates to that. I want to make sure that I understand this. There's a door behind me. If I go and I access is what I'm accessing qualia. And is it the action of accessing that's called access consciousness, like the manipulation of data or is right. It's the well, it's basically so you have awareness and then you have memory of the awareness that you know that some aspects of it knows that you were aware."
    },
    {
      "end_time": 13923.49,
      "index": 545,
      "start_time": 13897.142,
      "text": " So it's like so you can imagine awareness without really like one way of differentiated. It would be sort of we have with a sensory perceptual awareness that lasts, say, three tenths of a second to three seconds. It's like a flash. OK, then you have working memory, which extends it across time and puts it on a loop. That loop is what allows you to then gain access to it and manipulate it. So working memory is can be thought of then as a part as the"
    },
    {
      "end_time": 13950.811,
      "index": 546,
      "start_time": 13923.763,
      "text": " Uh, the whiteboard that allows continuous access to the flash. So there's a flash and then there's the extension and manipulation of the flash, which you then need access to. Okay. Uh, the basic layers of this, what John and I argue is that out of the body comes what we call valence qualia. Valence qualia basically orients and gives value to and can be thought of as in like pleasure and pain in the body. Okay."
    },
    {
      "end_time": 13977.619,
      "index": 547,
      "start_time": 13951.118,
      "text": " and it yokes a sensory state with an affective state and points you in a direction. Yoke means tie together, like to yoke stuff together. So this is the sort of the earliest form of consciousness is probably a kind of valence consciousness. Okay. That basically pulls you, you know, it feels good, feels bad kind of deal. It gets me active, gets me passive, but it's this sort of like this kind of felt sense of the body."
    },
    {
      "end_time": 13999.377,
      "index": 548,
      "start_time": 13978.541,
      "text": " The argument from John and I's position is that that probably is the base of your subjective conscious experience or the base of your phenomenological experience. Then what happens, and that would be maybe present in reptiles, fish, maybe down into insects at some level."
    },
    {
      "end_time": 14018.541,
      "index": 549,
      "start_time": 13999.599,
      "text": " Then the argument would be in birds and mammals, and maybe lower, we don't really know, but there's good reason to believe in birds and mammals, you get a global workspace. The global workspace is when it extends from the sensory flashes into a workspace where you have access and recursive looping on it."
    },
    {
      "end_time": 14046.988,
      "index": 550,
      "start_time": 14018.763,
      "text": " And it's the framing of that is the adverbial consciousness is the framing and extension of that. The here-ness, now-ness and togetherness that indexes the thing pulls it together. That's what John calls adverbial consciousness. Okay. And then it's what's on the screen of that adverbial consciousness is what John calls adjectival consciousness. So it's like, it's the screen of attention that orients and indexes. That's adverbial."
    },
    {
      "end_time": 14069.206,
      "index": 551,
      "start_time": 14047.619,
      "text": " First, I came in with three questions and now I have so many more. Okay, this valence, is it purely pain and pleasure or is there something else? Are there third, fourth elements? There's certainly pleasure, pain, active, passive to orient and like and want."
    },
    {
      "end_time": 14090.572,
      "index": 552,
      "start_time": 14069.65,
      "text": " Basically, you have what's called the circumplex model of affect, which basically is the core energizing structure of your motivational emotional in its broadest frame is two poles. One is active passive. It's like sort of spend energy or conserve energy."
    },
    {
      "end_time": 14110.538,
      "index": 553,
      "start_time": 14090.845,
      "text": " And the other pleasure that is either something that you want or something you like or pain that's something that you don't want or don't like at its basic. So that's the so the valence is what we sort of focused on in relationship to just the ground of it. But there are definitely at least these two poles of active passive and pleasure pain at a minimum."
    },
    {
      "end_time": 14126.203,
      "index": 554,
      "start_time": 14111.323,
      "text": " Can you make an analogy with this computer screen right now? So the computer screen is somehow the workspace and then the pixels and the fact that they're bound together is adjectival or the intensity is adverbial or the other way around. Can you just spell out an analogy? Absolutely."
    },
    {
      "end_time": 14151.596,
      "index": 555,
      "start_time": 14126.681,
      "text": " Right, so the screening, the framing of the screen, which Bren basically says, okay, this is the frame and the relevance and the here-ness, now-ness of what is going to be brought, that is adverbial. That's what John called adverbial consciousness. And he has a whole argument as to why, especially through what's called the pure consciousness event that's achieved in meditation and several other things, there's a differentiation between what he calls the indexing function of consciousness,"
    },
    {
      "end_time": 14180.384,
      "index": 556,
      "start_time": 14152.09,
      "text": " Which basically is the framing you bring you index you say that thing without specifying what the thing is. OK, it's the that thing that brings it and then you then discriminate on the properties. That's the diff. That's the different pixel shapes that give rise to a form that give rise to an experience quality. And that's the adjectival quality. And these are both of these are John's terms, but I've incorporated them in my work and I use them."
    },
    {
      "end_time": 14208.473,
      "index": 557,
      "start_time": 14181.152,
      "text": " Okay. Another analogy now to abandon the screen. It'd be like if looking is one aspect and then what you're looking at is another, what you're looking at is akin to the qualia in a pure consciousness event. The at may not be there, but you're looking. Exactly. It's the framing. Exactly. Index framing. That's why John May takes off his glasses. Okay. The glasses are much more like the adverbial framing. They pull and organize. Okay."
    },
    {
      "end_time": 14226.169,
      "index": 558,
      "start_time": 14208.746,
      "text": " As a looking, okay, the pointing the indexing. In fact, he actually he uses work in cognitive science. Okay, where you can track like if I give you like four different things to track on a screen. Okay, and they're changing colors and changing shapes four different things you can track."
    },
    {
      "end_time": 14245.913,
      "index": 559,
      "start_time": 14226.596,
      "text": " five six seven you stop losing ability to track however what you what you lose first is the ability to track the specifics you can tell where something is but you can't tell what it is actually so in other words it's sort of like you're trying to track everything but it changes like from red to blue to green you're much better like i think it's over there"
    },
    {
      "end_time": 14276.596,
      "index": 560,
      "start_time": 14246.732,
      "text": " It indexes, but I can't tell you whether it's an A, a B, a red or a green, I can't tell you the specificity. So in other words, I'm tracking the entity, okay, that's the index, and that's different than the specifying the nature of the form. And indeed, we have lots of different systems that track the, like, what is the thing versus how is it moving? The how is it moving is more of an index structure. But if we think of this kind of Bohmian wholeness, we could say that the Meta Crisis is a function of"
    },
    {
      "end_time": 14303.131,
      "index": 561,
      "start_time": 14278.029,
      "text": " missing Bohmian wholeness and doing optimization on parts. And so I can optimize self at the expense of other, but of course that then leads to others figuring out how to do that and needing to for protection. And now arms races of everybody doing that. The whole externality said I can optimize self at the expense of other. I can optimize in-group at the expense of out-group."
    },
    {
      "end_time": 14331.254,
      "index": 562,
      "start_time": 14303.882,
      "text": " I can optimize one metric at the expense of other metrics. I can optimize one species at the expense of other species. I can optimize my current at the expense of our future, all the way down to one part of self relative to the other parts of self. So the wholeness of all the parts of self in synergy and all of the people, species, et cetera, and how to consider the whole, how to consider the effects on the whole, maybe"
    },
    {
      "end_time": 14356.22,
      "index": 563,
      "start_time": 14331.766,
      "text": " That was something that other animals did not have to do. Maybe it was something that even earlier humans didn't have to do because they couldn't affect the whole all that much. When we have the ability to affect the whole this much, this quickly, because of tech, and particularly because of exponentially powerful tech, whatever ways we are either consciously saying this is a part of the whole I don't care about or I'm happy to destroy conflict theory,"
    },
    {
      "end_time": 14386.561,
      "index": 564,
      "start_time": 14356.749,
      "text": " or this is a part of the whole, I'm just not even factoring. Maybe I don't even know the factor that's in the unknown, unknown set, but I'm still going to affect it by the thing I do. So what is outside of my care or my consideration, right? Conflict theory and mistake theory with exponential tech gets harmed, produces its own counter responses and cascade effects. The net effect of that is termination with this much power. What does it take to steward the power adequately?"
    },
    {
      "end_time": 14413.626,
      "index": 565,
      "start_time": 14386.988,
      "text": " is to think about the total cascading effect of the choices and all agents doing that and say, how do we coordinate all agents doing that in a way that has the integrity of the whole up-regulated rather than down-regulated? And so I would say, Bohmian wholeness is a good framework for alignment, not alignment of an AI with human intent, but a"
    },
    {
      "end_time": 14438.746,
      "index": 566,
      "start_time": 14414.121,
      "text": " May I inquire how did you attain such a vast array of knowledge? What's your educational background? What does your routine look like for studying? Is it a daily one where you read a certain type of book and you vary the field week by week? What is the regimen? How did you get the way that you are? I think my"
    },
    {
      "end_time": 14463.814,
      "index": 567,
      "start_time": 14439.616,
      "text": " learning process probably in some ways similar to yours you said very fascinated and curious and I mean you did it you did something better than I did which is you pick the topics you're most interested in found the top experts in the world got them to basically tutor you for free in terms of like in aristocratic tutoring system you did a pretty awesome thing there"
    },
    {
      "end_time": 14490.452,
      "index": 568,
      "start_time": 14464.838,
      "text": " There were a few cases where I was fortunate enough to be able to do that. Other times I just had to work with the output of their work. But I think for me it was a combo of just innate curiosity independent of any use application. I think it's natural when you love something to want to understand it more. And so for me the impulse to understand the world is kind of a sacred impulse."
    },
    {
      "end_time": 14520.589,
      "index": 569,
      "start_time": 14490.981,
      "text": " but then also the desire to serve the world requires understanding it well enough to know how the fuck to maybe do that. So there is both a very practical and very not practical impulse on learning that happened to fortunately converge. And how is it that you're able to articulate the views that you have? How do you develop them? Do you start writing? Do you do it in conversation with people? Do you say some sentiment you realized, you know what, that was actually pretty great. I didn't even realize I thought that until I had said it. Now let me write it down so I can remember it."
    },
    {
      "end_time": 14546.084,
      "index": 570,
      "start_time": 14522.125,
      "text": " You know, I have hypotheses about how people develop the ability to communicate well, but my hypotheses about that and my own process are probably different. I think my own process is I was homeschooled, and I was homeschooled in a way that's maybe a little bit like what people call unschooling now, but I had no curriculum at all. But my parents just"
    },
    {
      "end_time": 14575.401,
      "index": 571,
      "start_time": 14547.005,
      "text": " They had never studied educational theory. They hadn't studied constructivism and thought that Montessori and Dewey's thoughts on constructivism were right. They just kind of had a sense that if kids' innate interest is facilitated, there's a kind of inborn interest function that will guide them to be who they're supposed to be. So there were some downsides to that, which is"
    },
    {
      "end_time": 14600.196,
      "index": 572,
      "start_time": 14576.032,
      "text": " Because I had no curriculum, I didn't have writing a letter a bunch of times to get fine motor skills down, so I have illegible handwriting. I know what the shapes look like, but I have illegible handwriting. I spelled phonetically until I became an adult and spell checker taught me how to spell. Interesting. I missed some significant things."
    },
    {
      "end_time": 14626.92,
      "index": 573,
      "start_time": 14600.469,
      "text": " I also got a lot earlier, deeper exposure to the things I was really interested in, which were philosophies, spiritual studies across lots of areas, activism across all the areas, and sciences, and poetry. But my education was largely talking with my parents and some of their friends, and it was largely talking"
    },
    {
      "end_time": 14652.995,
      "index": 574,
      "start_time": 14627.619,
      "text": " I actually didn't, it wasn't until later that I did a lot of reading and writing. So I think it just was very conversation, it was very native, more than in a lot of people's developmental environment. I think that's the answer for me. I could say that for other people I have seen when they start writing and trying to say, what is the most concise and precise way of writing this?"
    },
    {
      "end_time": 14683.148,
      "index": 575,
      "start_time": 14653.387,
      "text": " Alright, that was quite a slew of information and it's advantageous to go through and let's go over a summary as to what's been discussed so far. You'll get a final word from Daniel in a few minutes. For now, you've watched three hours plus of this. Let's get our bearings"
    },
    {
      "end_time": 14702.432,
      "index": 576,
      "start_time": 14683.148,
      "text": " We've talked about how the emergence of AI poses a unique risk that can't be regulated by a national agency like the FDA for AI, but instead they require some global regulation. Again, this is all argued by Daniel. These aren't my positions. I'm just summarizing what's occurred so far. The risks associated with AI are not those that are comparable to a single chemical"
    },
    {
      "end_time": 14731.203,
      "index": 577,
      "start_time": 14702.432,
      "text": " as AIs are dynamic agents they respond differently and unpredictably to stimuli. We've also talked about the multipolar trap which is regarding self-policing and a collective theory of justice such as Singapore's drug policy that was outlined and how this line of thinking can be applied to prevent global catastrophic events caused by coordination failures of self-interested agents. You can go back to that bit on national equilibrium to understand a bit about that as well as the multipolar trap section timestamps in the description. We also referenced"
    },
    {
      "end_time": 14745.145,
      "index": 578,
      "start_time": 14731.203,
      "text": " a false flag alien invasion and can that unify humanity. A theme throughout has also been how AI has the potential to revolutionize all fields but it also poses risks such as empowering bad actors and the development of unaligned general artificial intelligence."
    },
    {
      "end_time": 14765.265,
      "index": 579,
      "start_time": 14746.988,
      "text": " okay so this happened about one week ago or so i debated whether or not i should just record an extra piece now or if i should wait till some next video but given the pace of this and how much content has already been in this single video i thought hey i'll just record it and give you all some more content maybe some people aren't aware of this and i think they should be"
    },
    {
      "end_time": 14790.52,
      "index": 580,
      "start_time": 14765.265,
      "text": " The godfather of AI leaves Google. This is Jeffrey Hinton. If AI could manipulate or possibly figure out a way to kill humans, how could it kill humans? If it gets to be much smarter than us, it'll be very good at manipulation because it will have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it'll figure out ways of getting around"
    },
    {
      "end_time": 14815.896,
      "index": 581,
      "start_time": 14790.811,
      "text": " Jeffrey Hinton is someone who resigned from Google approximately one week ago because he believed that AI bots were quite scary. Right now they're not more intelligent than us, as far as he can tell, but he thinks they soon may be. He also said here in some of these quotes that I have that it's hard to see how you can prevent bad actors from using"
    },
    {
      "end_time": 14835.009,
      "index": 582,
      "start_time": 14815.896,
      "text": " large language models or the upcoming artificial intelligence models for bad things, Dr. Hinton said. After the San Francisco startup, OpenAI released a new version of ChatGPT in March. As companies improve their artificial intelligence systems, Hinton believes that they become increasingly dangerous. Lookout was five years ago and how it is now."
    },
    {
      "end_time": 14860.606,
      "index": 583,
      "start_time": 14835.23,
      "text": " He said of AI technology, take the difference and propagate it forward. That's scary. His immediate concern is that the internet will be flooded with false videos and text and the average person will not be able to know what's true any longer. Now he says, and I quote, the idea that this stuff could actually get smarter than people. A few people believe that he said, but most people thought it was way off and I thought it was way off."
    },
    {
      "end_time": 14887.91,
      "index": 584,
      "start_time": 14860.606,
      "text": " In fact, I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that. Also, there's this TED talk that's recently been published as well just a few days ago. It seems like less than two weeks ago. Yejin Choi, who's a computer scientist, said this. Extreme scale AI models are so expensive to train and only few tech companies can afford to do so. So we already see the concentration of power."
    },
    {
      "end_time": 14918.814,
      "index": 585,
      "start_time": 14889.138,
      "text": " But what's worse for AI safety? We're now at the mercy of those few tech companies because researchers in the larger community do not have the means to truly inspect and dissect these models. Then Chris Anderson comes on and asks about, hey, look, if what we need is some huge change, why are you advocating for it? Because there's a huge change, a large change. It's not like a foot at a time. Every time these AIs are released, this is what her response is."
    },
    {
      "end_time": 14949.104,
      "index": 586,
      "start_time": 14919.411,
      "text": " There's a quality of learning that is still not quite there. We don't yet know whether we can fully get there or not just by scaling things up. And then even if we could, do we like this idea of having very, very extreme scale AI models that only a few can create and own? And lastly, there's this video by Sabine Haustenfelder that was released just a few days ago."
    },
    {
      "end_time": 14971.527,
      "index": 587,
      "start_time": 14949.582,
      "text": " Many people are concerned about the sudden rise of AIs, and it's not just fear-mongering. No one knows just how close we are to human-like artificial intelligence. Current concerns have focused on privacy and biases, and that's fair enough. But what I'm more worried about is the impact on society, mental well-being, politics and economics."
    },
    {
      "end_time": 14995.742,
      "index": 588,
      "start_time": 14971.527,
      "text": " A just released report from Goldman Sachs says that the currently existing AI systems can replace 300 million jobs worldwide and about one in four work tasks in the US and Europe. According to Goldman Sachs, the biggest impacts will be felt in developed economies. Our currently unaligned general intelligence is an issue. Adding artificial in there is like another can of worms, man."
    },
    {
      "end_time": 15014.292,
      "index": 589,
      "start_time": 14995.742,
      "text": " The alignment problem isn't just about aligning human intentions with the collective well-being, but also about aligning the different paths of ourselves to work synergistically toward a common goal. This requires a cultural alignment, enlightenment, I think he used the word, though I'm not entirely sure. We also talked about meme complexes that survive past their hosts."
    },
    {
      "end_time": 15029.804,
      "index": 590,
      "start_time": 15014.292,
      "text": " and how this is intimately tied up with the notion of the good and just so you know my feelings are that memes are an emphatically mechanical way of looking at a complex phenomenon such as a society an extremely complex phenomenon such as a religion of a society across time and across other societies interacting"
    },
    {
      "end_time": 15053.404,
      "index": 591,
      "start_time": 15029.804,
      "text": " I don't believe my point was adequately conveyed and if you're interested in hearing more, then let me know in the comments and I'll consider expanding on my thoughts in a future podcast. We also talked about naive techno-optimism and how it often overlooks externalized costs of progress. A responsible techno-optimism requires thinking about how to get more upsides with less downsides, which can't be achieved. Goodheart's law then applies to any metric that's incentivized"
    },
    {
      "end_time": 15073.456,
      "index": 592,
      "start_time": 15053.404,
      "text": " It leads to perverse forms of fulfilling said metric. Alright, as you can see, so much energy went into this episode, so much thought, so much editing, so much script revision, so much interaction with the interviewee, and double checking if this was accurately representing what was said, and we plan on continuing that for Season 3. More work went into this episode than any"
    },
    {
      "end_time": 15097.585,
      "index": 593,
      "start_time": 15073.456,
      "text": " Any of the other episodes of the whole history of theories of everything if you'd like to support this Podcast and continue to see more then go to patreon.com Slash Kurt Jai Mungle the link is on the screen right now as well as in the description There's also theories of everything org if you're uncomfortable giving to patreon There's also a direct PayPal link if that's what you're interested in you should also know that as of right now. There's launched a"
    },
    {
      "end_time": 15122.278,
      "index": 594,
      "start_time": 15097.585,
      "text": " Merch. We've just launched the next Merch. This is the second time that Merch has ever been on the Toe channel. The first one is completely gone. You can't find any of those any longer, but now you can see it on screen. These are references to different Toe episodes like Just Get Wet and Distorture. Thumbs up if you recognize that and you have Toe and it's babbling all the way down. That's from Carl Friston by the way. Don't thrust your toe. Trust your toe."
    },
    {
      "end_time": 15149.172,
      "index": 595,
      "start_time": 15122.278,
      "text": " Hey, don't talk to me or I'll bring up Hegel. Many of these are references, like I mentioned, I agree. I agree with how you're agreeing with me. This is what Verveki said to Ian McGilchrist. You have to be a significant fan to understand this reference. And then also, there's Verveki, who's known for saying, there's the being mode, and then there's the having mode. Got Abygenosis? I say face-face, inorganically, in everyday conversation. I have a toe fetish. I'm just a gym rat for toes. That's me, that's what I say, frequently. There's also a purse and a toe hat."
    },
    {
      "end_time": 15175.418,
      "index": 596,
      "start_time": 15149.172,
      "text": " some toe socks i think that was one of the most popular of the first round so the toe socks are making a comeback if you want to support the channel and flaunt whatever it is that you feel like you're flaunting then feel free and visit the merch link in the description or you can visit tinyurl.com slash toe merch t-o-e merch m-e-r-c-h just so you know everything every single thing that you're seeing this editing these effects"
    },
    {
      "end_time": 15196.288,
      "index": 597,
      "start_time": 15175.418,
      "text": " Speaking with the interviewee, all of this is done out of pocket. I pay for the subscription fees. I pay for Zoom. I pay for Adobe. I pay for the editor. I pay personally for travel costs. If there are any, I pay for so much. There's so much that goes into this. Sponsors help, but also your support helps a tremendous, tremendous amount. I wouldn't be able to do this without you."
    },
    {
      "end_time": 15221.476,
      "index": 598,
      "start_time": 15196.288,
      "text": " so thank you so much thank you for watching for this long holy moly again if you want to support them you can get some merch if you like and if you want to give directly on a monthly basis to see episodes like this with such hopefully quality hopefully something that's educating that's elucidating to you that's illuminating to you then visit patreon.com or like i mentioned there's a paypal link in the description there's also a crypto link in the description"
    },
    {
      "end_time": 15240.623,
      "index": 599,
      "start_time": 15221.476,
      "text": " Now I'm also interested in hearing what the other side, the other side, the people who are pro-AI, unfettered AI, who say, hey there's nothing to see here, you are all being hyperbolically hysterical. I'd like to see someone respond to what Daniel has said about AI but also civilizational risks in general and how AI exacerbates those. So if you think of any guests,"
    },
    {
      "end_time": 15268.814,
      "index": 600,
      "start_time": 15240.623,
      "text": " who would serve as great counterpoints, especially those who are researchers in machine learning, then please suggest them in the comments section. If you're a professor and you're watching and you'd like to have a friendly theolocution, that means a harmonious incongruity, a good-natured debate where the goal isn't to debate but to understand one another's point of view. If you're watching this and you think, hey, I would like to come on to the Theories of Everything channel as a professor along with my other professor friend who believes something that's antithetical to what I believe about AI risk, then"
    },
    {
      "end_time": 15283.404,
      "index": 601,
      "start_time": 15268.814,
      "text": " please message me you can find my email address i'm sure you can also leave a comment yeah and who knows when the next episode of toe is coming out by the way the next one is going to be john greenwald should be in about a week or a week and a half all right let's get back to this with daniel schmartenberger"
    },
    {
      "end_time": 15312.978,
      "index": 602,
      "start_time": 15283.985,
      "text": " Well, this is a great place to end. Daniel, you're now speaking directly to the people. Well, you have been this whole time, but even more so now to the people who have been watching and listening. What's something you want to leave them with? What should they do? They're here. They've heard all these issues. They hear Bohmian. They're like, OK, that sounds cool. That's motivating. It's a bit abstract, but it is motivating. OK, what should I do, Daniel? I want the earth to be here in decades from now, centuries from now. What should I do?"
    },
    {
      "end_time": 15339.548,
      "index": 603,
      "start_time": 15313.968,
      "text": " So I'm going to answer this in a way that I think factors who your audience probably is. I don't know, we even shared demographics with me, but based on the attractor, I can guess. If I was answering just to a series of technologists or investors or bureaucrats, I might say something different."
    },
    {
      "end_time": 15361.442,
      "index": 604,
      "start_time": 15340.026,
      "text": " and realizing that amongst that audience, there are people who are going to have radically different skills and capacities and parts of it that they feel the most motivated and oriented to. So I'm obviously not going to say one thing everybody should do. Okay, what I'll say is"
    },
    {
      "end_time": 15394.428,
      "index": 605,
      "start_time": 15370.862,
      "text": " Whether it's hearing a conversation like this where the planetary boundaries and really thinking about how that there's more biomass of animals and factory farms and there are in the wild left of the total amount of species extinction or the what the risks associated with rapid development of decentralizing synthetic biology and AI are you hear these things you're like fuck and it connects you to"
    },
    {
      "end_time": 15422.142,
      "index": 606,
      "start_time": 15395.35,
      "text": " what is most important beyond your own narrow life or even the politics that is coming into your stream. Or whether it's when you have a deep meditation or a medicine journey or whatever it is and connect to what is most meaningful. Design your life in a way where that experience happens regularly. So what you are paying attention to and optimizing for on a daily basis is connected to the deepest values you have."
    },
    {
      "end_time": 15451.493,
      "index": 607,
      "start_time": 15423.08,
      "text": " Because on a daily basis, the people around you and your job and your newsfeed are probably sharing other things. So try to configure it that the deepest true good and beautiful that you're aware of is continuously in your awareness. So your daily choices of how you spend your time and money is continuously at least informed by that. That's the first thing I would say. I'll say a couple of other things. Aligned with that is"
    },
    {
      "end_time": 15480.606,
      "index": 608,
      "start_time": 15455.009,
      "text": " look at things that are happening in the world online to have a sense of things that you can't see in front of you, but then also get offline and connect with both the trees in front of you and without any modeling or value system, just how innately beautiful they are, and also the mirror neuron experience when you're with a homeless person."
    },
    {
      "end_time": 15502.244,
      "index": 609,
      "start_time": 15480.998,
      "text": " So both have a sense of what's happening at scale, but then also ground an embodied sense, your own care for the real world that is not just on a computer. There's a real world here. And then realize deep in, should actually matters. Independent of whether I can formalize a particular"
    },
    {
      "end_time": 15525.418,
      "index": 610,
      "start_time": 15502.995,
      "text": " meaning or purpose of the universe argument or formalize a response to solipsistic arguments or nihilistic arguments like prima facie, reality is meaningful. And I actually do care. I wouldn't get sad or upset or inspired if I didn't care about anything. I actually do care."
    },
    {
      "end_time": 15555.23,
      "index": 611,
      "start_time": 15526.288,
      "text": " And so life matters and I make choices and I can make choices that affect the world. So my own choices matter. So what choices am I making every moment? And what is the basis that I want to guide them by, right? To just deepen the sense of the meaningfulness of life in your own choice and the seriousness with which you take how you design your life, particularly factoring the timeliness and eminence of the issues that we face currently."
    },
    {
      "end_time": 15587.193,
      "index": 612,
      "start_time": 15557.483,
      "text": " And then the last thing I would say is as you could like really work to get more educated about the issues that you care about and are concerned about, really work to get more educated about them, get more connected to the people working on them and really study the views that are counter to the views that naturally appeal to you. So you bias correct so that your own"
    },
    {
      "end_time": 15617.602,
      "index": 613,
      "start_time": 15587.654,
      "text": " Thank you, Daniel. I appreciate you spending almost four hours now"
    },
    {
      "end_time": 15646.254,
      "index": 614,
      "start_time": 15617.875,
      "text": " Likewise. We covered a bunch of areas that I did not expect, but they're all good areas. I'm curious how the thing ends up getting edited and makes it through. And I'm also curious with your particularly philosophically interested and insightful audience, what questions and thoughts emerge in this and maybe we'll get to address some of them someday. Yeah, there's definitely going to be a part two."
    },
    {
      "end_time": 15675.06,
      "index": 615,
      "start_time": 15646.749,
      "text": " Cool. A much more philosophical part too if this one wasn't already. The podcast is now concluded. Thank you for watching. If you haven't subscribed or clicked on that like button, now would be a great time to do so as each subscribe and like helps YouTube push this content to more people. You should also know that there's a remarkably active Discord and subreddit for theories of everything where people explicate toes, disagree respectfully about theories, and build as a community our own toes. Links to both are in the description."
    },
    {
      "end_time": 15701.886,
      "index": 616,
      "start_time": 15675.06,
      "text": " Also, I recently found out that external links count plenty toward the algorithm, which means that when you share on Twitter, on Facebook, on Reddit, etc., it shows YouTube that people are talking about this outside of YouTube, which in turn greatly aids the distribution on YouTube as well. If you'd like to support more conversations like this, then do consider visiting theoriesofeverything.org. Again, it's support from the sponsors and you that allow me to work on Toe full-time."
    },
    {
      "end_time": 15718.985,
      "index": 617,
      "start_time": 15701.886,
      "text": " Thank you."
    }
  ]
}
