Audio Player
Starting at:
⚠️ Timestamps are hidden: Some podcast MP3s have dynamically injected ads which can shift timestamps. Show timestamps for troubleshooting.
Transcript
Enhanced with Timestamps
190 sentences
11,681 words
Method: api-polled
Transcription time: 78m 29s
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region.
I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines.
As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
AGI in five years or something from now. What's cool is how uncontroversial these statements are in so many circles are now, right? We will have massively superhuman AGI that will exceed humans in essentially every respect of intelligence. So I mean, I think what happens to human society during that transitional phase is very nasty and difficult.
Today we talk about large language models, artificial general intelligence, and merging with machines. Ben Gortzel is a computer scientist, a mathematician, and an entrepreneur. He's the founder and CEO of Singularity Net, and his work focuses on AGI, which aims to create truly intelligent machines that can learn, reason, and think like humans.
This was another talk that was conducted at the Center for the Future Mind at the MindFest conference at the beautiful beach of Florida Atlantic University. Expect to hear more discussions on AI by speakers like Gortzel. Gortzel will also appear with David Chalmers on an upcoming episode, as well as David Chalmers on his own, as well as Wolfram on his own, as a part two to the lecture that he's already given. Part one for Wolfram is listed here. This will occur over the next few days and weeks on the Theories of Everything channel.
As usual, links to all are in the description. I would like to thank Susan Schneider, Professor of Philosophy, because this conference would not have occurred without her spearheading it, without her foresight, without her inviting theories of everything as the exclusive filmer of the event, as well as we're going to host a panel that will be on later with David Chalmers and Susan Schneider. I also would like to thank Brilliant for being able to defer some of the traveling costs. Brilliant is a place where there are bite-sized interactive learning experiences for science, engineering, and mathematics.
artificial intelligence in its current form uses machine learning which uses neural nets often at least and there are several courses on brilliance websites teaching you the concepts underlying neural nets and computation in an extremely intuitive manner that's interactive which is unlike almost any of the tutorials out there they quiz you i personally took the course on random variable distributions and knowledge and uncertainty
because I wanted to learn more about entropy, especially as there may be a video coming out on entropy, as well as you can learn group theory on their website, which underlies physics, that is SU3 x SU2 x U1 is the Standard Model Gauge Group. Visit brilliant.org slash TOE to get 20% off your annual premium subscription.
As usual, I recommend you don't stop before four lessons. You have to just get wet. You have to try it out. I think you'll be greatly surprised at the ease at which you can now comprehend subjects you previously had a difficult time grokking.
Thanks for inviting me and it's a really, it's a
Fun time to be at a conference of this nature with all the buzz around the AI and intelligence and AGI and so forth. I mean as Rachel alluded
I've been doing this stuff for a while, like many of the speakers here. I mean, I did my PhD in math. I did my PhD in math in the late 80s, but even at that point I was interested in AI and the mathematics of mind and in implementing the mathematics of mind in computers. And of course, most people in
This room, no, though most people on the planet do not, that the AI field was already quite old by the 1980s, right? I mean, now, you know, in the Uber ride over here, I told the lady driving the Uber, I was going to a conference on basically
Machines that think like people how to make machines that machines think like people. I mean she obviously had no tech background I said I've been working on this since the 80s. So first of all, she's like oh I had no idea people working on this that long ago. Secondly, she's like but I
I thought that had already been done in machines because I already think like people, right? So I mean her assumption is that it had already been solved and was running in the background making some billionaires money and like that was just the state of things, right? So, but it's interesting. That certainly wasn't the case five or ten years ago, right? But I think, you know, folks in this room are aware that work on these topics has been going on
a very long time and also that many perhaps almost all of the core ideas underlying what's happening in AI now are also fairly old ideas which have been improved and tweaked and built on of course as better and better computers and more data and better network and so on have allowed
allowed implementation of things just at a larger scale, thus experimentation at a larger scale. So, you know, it's been fascinating to hear talks on fundamental nature of consciousness and consciousness in babies and then organoids and then on the structure and dynamics of the physical universe being addressed
using data structures and dynamical equations really more characteristic of AI and computer science than the physics. So there's clearly fascinating sort of convergence and cross-pollination going on with biology, psychology, physics, computer science, math. I mean, more and more everything is coming together. What I want to focus on in my talk
today is what I think are viable paths to get from where we are now, where we have machines that can fool the random Uber driver into thinking that they are human-like general intelligences. How do we get from where we are now to machines that actually are human-level general intelligences and
I believe shortly after that we will have machines that are vastly greater than human-level general intelligences. I'll say a little bit about that, but I'm going to focus more on the path from here to human-level AGI.
Few preliminaries before I get into that. I'll talk a little bit about these GPT type systems which are sort of the order of the day and a little bit about how I connect intelligence with consciousness. I'll try to go through these huge topics fairly rapidly and then move on to approaches to engineering AGI. So I think regarding
Chat GPT, other transformer neural networks, a bunch of correct and interesting things have been said here already. I mean, I was also impressed and surprised by some aspects of the function of these systems. I was also not surprised that they lack
certain sorts of fundamental creativity and the ability to do sustained precise reasoning. And I think, you know, while I was surprised at just how well
Chat GPT and similar systems can sort of bullshit and bloviate and write college admission essays and all that. In a way, I'm not surprised that I'm surprised because I know that my brain doesn't have a good intuition for what you do when you take the entire web and feed it into a database and put some smart look up on top.
Just like I know my brain is bad at reasoning about the difference between a septillion and a nonillion. We're not really well adapted to think about some of these things that we don't come across in our everyday lives. So even now, if you ask ChatGBT to compose a poem, and it does, I don't have a good intuitive sense for
How many poems of roughly similar nature are on the web and were fed into it? You could do that archaeology with great time and effort by putting probes into the network while it does certain queries, but that's a lot of work. What's intriguing to me as someone who's written and thought a lot about general intelligence is these systems achieve what
appears like a high level of generality relative to the individual human, right? They're very general compared to an individual human's mind, but they achieve that mostly by having a very, very broad and general training database, right? They don't do big leaps beyond their training database, but they can do what appears very general to an individual human and
without making big leaps beyond their training database, because their training database has the whole fucking web in it, right? I mean, that's a very interesting thing to do. It's a cool thing to do. It may be a valuable thing to do, right? It may be that using this sort of AI, not GPT in particular, but large language models, transformer neural nets of this character, done cross-modally, integrated with reinforcement learning, I mean, it may be that with this type of AGI,
Excuse me. It may be that with this type of narrow AI system, even falling short of general intelligence in the human sense, I won't be shocked if that ultimately absolutes like 95% of human jobs, right? I mean, there's a lot more work to be done to get there, of course, and many jobs involve
physical manipulation and integrating LLMs with robots through physical manipulation is hard as we see from these robots here like this dog whose head just fell off and Sophia who's on a tripod, although she's been on wheels and legs at various times. So there's a lot of engineering work, there's a lot of training and tuning work, but fundamentally I wouldn't be shocked if
Very high percent of jobs that humans now get paid to do could be could be obsolete by this sort of technology. And I mean, there are going to be jobs like a preschool teacher or hospice care therapist that you just want to be done by a person because it's about it's about human to human connection, just like we see live music, even though recorded music may sound better, because we like being in the room with people, other humans playing music, right, but
That's a minority of jobs that people do, and there are also jobs doing things that
These sorts of neural nets, I think, will never be capable of, and I'll come to that in a moment, though I think different sorts of algorithms could do it. But it's not a big percent of human jobs, right? So one lesson to draw here is almost everything people get paid to do is just rote and repetitive recycling of stuff that's already been done before, right? So if you feed a giant neural net with a lot of examples of everything that's been done before that can then pick stuff out of this database and merge them together judiciously,
Okay, you eliminate most of what people get paid to do. It takes a little bit of time to roll this out in practice, not necessarily that long. I know some friends of mine who were from the AGI research community started a company called Apprenti maybe four or five years ago. They started out wanting to build AGI. Their VC investors channeled them, as would be the usual case, into doing some particular application instead. What they ultimately did was automate the McDonald's drive-through.
They sold the company to McDonald's maybe two and a half years ago. It's now their technology is starting to get rolled out in some real McDonald's around the world, right? So you're getting that guy who sits behind the drive-through window listening to stuff over that noisy horrible microphone like give me a Big Mac and fries, hold the ketchup, right? So they're finally automating away these people which
What's one thing that's interesting to me there is to see from that technology being shown to work
to it actually being deployed across all the McDonald's is taking at least five years, right? So I mean it's obvious that could be automated to me a long time ago. It was shown two and a half years ago that it could work on some McDonald's, but it's still not rolled out everywhere. It's rolled out in certain states, right? But then even like replacing the guy pushing the hamburger on the cash register with a touchscreen where you push the hamburger yourself, like even that's taking a long time to get rolled out.
You know, these practical transitions will take a while. They're really, really interesting, but there's some things I think are held back not by practical issues, but by fundamental limitations of this sort of technology. In essence, I think these are anything that intrinsically requires
Taking a big leap beyond everything you've seen before and this sort of gets it the fundamental difference between what I think of as narrow AI and what I think as AGI. What I think of as AGI, artificial general intelligence, which is the term I introduced in 2004 or something in an edited book by that name from Springer. And this refers to a system
that has a robust capability to generalize beyond its programming and training and its experience and sort of take a leap into the unknown and that you know every baby does that, every child does that. I mean I have a five-year-old and a two-year-old now and three grown kids and every one of them has made an impressive series of wild leaps into the unknown like as they learn to do stuff that we all consider
Basic. Now that doesn't mean an AI system couldn't do the same things a two and five-year-old can do without itself making a leap into the unknown. It can do it by watching what a billion two-year-olds did and interpolating, but kids still do that. In terms of job functions that adults do, I mean, doing impressive science
Almost always involves making a leap into the unknown. I mean, there's a bunch of garbagey science papers out there. But if you look at the Facebook Galactica system, which was released and then retracted by Facebook, I mean, a large language model for generating science papers and such, you can see the gap between what large language models can do now and even pretty bad mediocre science. Like what Galactica sped out was pretty much science looking gibberish. Like it, you know, it sped out like
you ask it, tell me about the Lenin-Ono conjecture and it will split out some trivial identity of set theory invented by John Lenin and Yoko Ono and it's amusing but it's not able to do science at the level of a mediocre master student, right, let alone a really strong professional researcher and I mean the part of the core reason there
is about taking a step beyond what was there. It's specifically not about just recombining in a facile way what was there before. So writing an undergrad essay for like English 101 kind of is about making a facile recombination of what was there before. So that's already automated away and we have to find other ways to attempt to assess undergrad students, right? So I mean in music I would say synthesizing like a new 12 bar blues song
There's no release system that can do that now but I'm sure it's coming in the next few years and some folks on my team in Singularity Net are working on that too. I mean Google's model music LM goes part way there but it's not it's not released it's clear how to do better. On the other hand think about Think Verizon the best 5G network is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store
If you fed a large language model or other comparable neural net with all music
Composed up to the year 1900, let's say, just supposing you had it on the database. Is it going to invent jazz? Is it going to invent progressive jazz? Is it going to invent rock? You could ask it, let's put West African drumming together with Western classical music and church hymns. It's going to give you
Mozart and swing low sweet chariot with a West African polyrhythmic beat, which may be really cool, but it's not going to bring you to Charlie Parker and John Coltrane, Jimi Hendrix and Spongel or whatever else. It's just not going to do it.
There's a sense in which this sort of creativity is combinatorial, right? I mean, jazz is putting together West African rhythms with Western church music and chord progressions, and rock is drawing from jazz and simplifying it and so forth. But that type of combination that's being done when humans do this sort of fundamental creativity, it's different than the type of combination the chat GPT type system is doing, which really has to do
with how knowledge is represented inside the system. So I think these systems can pass the Turing test. They may not quite yet, but I think they can certainly, if you're talking about fooling a random human that it's a human, it can probably already do that. I suspect without solving the ATI problem you could create a system that would fool me or anyone into thinking it was a human in a
in a conversation because so many conversations have been had already and people aren't necessarily that clever either, right? I don't think these systems could pass a five-year long Turing test because I could take a person of average intelligence
And I could teach them computer programming and a bunch of math. And I could teach them to build things and so on over a long period of time. And I don't think you could ever teach GPT-4 or chat-GPT in that sort of way. So if you give me five years with a random human, I could turn them into a decent AI programmer and electronics engineer and so forth. And that goes beyond, right? But Alan Turing didn't define the Turing test.
as five years long. He defined it as a brief chat. But of course, he wasn't imagining what would happen if you put all of the web into a lookup table either, right? Because he was very smart, but that was a lot to see at that point in time. So I think, I mean, another example of something I think this sort of system wouldn't be able to do
Let's say business strategy or, you know, political policy planning at the high level because they're the nature of it is you're dealing with the world that's changing all the time and always throwing something new and weird at you that you didn't expect. And if you're really just recombining previous strategies, it's not a terrible thing to do, but it's not what has built the greatest companies. What's built the greatest companies is
You know, pivoting in a weird way, making elite beyond experience. So, I mean, there certainly are things humans do that go beyond this sort of facile, large-scale pattern matching and pattern synthesis, but it's interesting how far you have to reach to find them. On the other hand, it does mean if you had a whole society of chat GPTs,
It will never progress, right? I mean, and some people might like that better, but it would genuinely be, it will be stuck, stuck now in its shallow derivations of where you could get from now, right? It's never going to make another, it's not going to launch the singularity. And it's, there's a lot of other smaller things in that it's not going to do. So I dwelt on that a bit, partly because it's topical, but partly because I think it frames the discussion on
general intelligence reasonably well in the sense that it highlights quite vividly what isn't a general intelligence, right? Now, what is an intelligence is a bigger and subtler question, obviously, and I'm going to mostly bypass
Problems of consciousness that were discussed here this morning not not because they're not interesting, but they're just that's a whole whole subject area in itself, and I don't have that much time, but I mean I'm Fundamentally I'm somewhat panpsychist in in orientation. So I mean I tend to think that you know this microphone is
has its own form of consciousness and I don't care much if you want to call it consciousness or proto-consciousness or blah blah blah or whatever but I think that the essence of what it is to experience to me is just imminent in everything and it does manifest itself differently in the human brain than in a microphone and how similarly it will manifest itself in a human level AGI from a biological brain
It's a very interesting question and it probably depends on many aspects of how similar that AGI is to the human brain. So like what's the continuity between structure and dynamics of a cognitive system and the experience felt by that system? Is a small change in structure and dynamics obviously do a small change in the felt experience? There's a lot of fascinating
Subtle questions there, which I'm going to punt on for now. We can give another talk about that. That's another time but What is intelligence? It's a slightly more interesting and relevant question. I also think not such a critical one like
Fussing about what is life is not much of what biologists do. And you can do a lot of progress in synthetic biology without fussing around about what is life and worrying about whether a virus really is or isn't alive. Like who really cares? It's a virus. It's doing its thing. It has some properties we like to call lifelike. It lacks some others. And synthetic biology systems, each of them may have some properties we consider lifelike and lack some others. That's fine.
But there's still something to be gained by thinking a little bit about what is intelligence, what is general intelligence. Marcus Hooter had a book called Universal AI published in 2005 or something. He proposed a formalization of intelligence which basically is the ability to achieve computable reward functions in computable environments.
You have to weight them, so you're averaging overall reward functions in all environments and what he does is he weights the simpler ones higher than the more complex ones and this leads to a bunch of fairly vexed questions of how do you measure simplicity and how equivalent is it to transfer between one measure of what environments rewards are simpler than others. But one thing that's very clear when thinking about this sort of definition of intelligence is
Humans are pretty damn stupid. We're very bad at optimizing arbitrary computable reward functions in arbitrary environments. For example, rounding a 708-dimensional maze is very hard for us, which is not even a complex thing to formalize. We learn to run a 2D maze, 3D maze maybe. Beyond that, most people become very confused, but then
I mean, in the set of all computable environments and reward functions, there may be far more higher dimensional mazes than two or three dimensional mazes, depending on how you're weighting things, right? Let alone fractal dimensional mazes. I mean, so there's a lot of things we're just bad at. We come out very dumb by that criterion, which may be okay. We don't have to be the smartest systems in the universe. An alternate
A more philosophically deep way of thinking about intelligence was given by my friend Weaver, aka David Weinbaum, in his PhD thesis from the Free University of Brussels, which was called Open-ended Intelligence. He's going back to continental philosophy in the Deluse and Qatari and so forth. He's looking at
Intelligent systems as a complex self-organizing system, which is driven by the dual complementary and contradictory drives of individuation, trying to maintain your boundary as a system, which relates somewhat to autonomy as it was discussed earlier today, but is I think more clearly defined.
Individuation and then self-transcendence, which is basically trying to develop so that the new version of yourself, while connected by some continuous thread with the older version of yourself, also has properties and interactions that the old version of yourself could never understand. Of course, all of us have both individuated and self-transcended over the course of our human lives. Human species has also.
This doesn't necessarily contradict Marcus Hutter's way of looking at it. I mean, you could say through the iterated process of individuation and self-transcendence, maybe we've come to be able to optimize even more reward functions and even more environments, right?
All these abstract ways of looking at things don't really give us a way to tell how much smarter a human is than an octopus, or how smart a chat GPT is relative to Sophia, or exactly how far we've progressed toward AGI. I think all these theoretical considerations have a lot of mathematical moving parts that are quite abstract.
In practice what we see is most people will give credit to chat GPT for being human level AGI even though experts can see it isn't. I had posed a while ago what I called the robot college student test where I figured if you had a robot say a couple dot versions ahead of this one
You have a robot that can go to, let's say, MIT, do the same exact things as a student, roll around the classes, sit and listen to the assignments, take the exams, do the programming homework, including group assignments, and then graduate. And then I figure then I'm going to accept that thing is, in effect, a human level general intelligence. And I mean, I'm not 100% on that, so I might be able to hack that. But you can see the university is set up
It is set up precisely for that purpose, right? I mean it's set up to teach, it's set up to teach a science university especially, it's set up to teach the ability to do science, which involves leaping beyond what was known before, and it's set up to try to stop you from cheating also, right? So I mean I'm assuming that the robot isn't going to class and cheating by like sending 4G messages to some scientists in Azerbaijan or something, but like going
going through it in a genuine way. But again, that sort of test, you could argue about the details, it's measuring human-like general intelligence. I mean, it's very clear you could have a system
that's much, much smarter than people in the fundamental sense, but misses social cues so that it wouldn't do well in group assignments in college or something. And you can see that from the fact that there are autistic geniuses who are human and would miss social cues and do poorly in group assignments. And they're still within the scope of human systems. So I'd say fundamentally, you know, articulating what is intelligence
It's an interesting quest to pursue. I'm not sure we've gotten to a final
consensus on what is intelligence that bridges the abstract to the concrete. I'm not sure that we need to. It's pretty clear we don't need to. Like we could make a breakthrough to human level AGI and even superhuman AGI and we still haven't pinned down what is intelligence. I mean just as I think we could do synthetic biology to make weird new freakish life forms come out of the lab without having a consensus on like fundamentally what is life.
I don't see any reason there's one true golden path. I mean, I think a well-worn but decent example
is manned flight. I mean, you've got airplanes, you got helicopters, you got you got spacecraft, you got you got blimps, you've got probably ways of flying that we have pedal powered flight machines, you probably many things that we haven't, we haven't thought of rid that right. I mean, so I mean, there you add the fundamental principles of aerodynamics and fluid mechanics. And when you know that you can figure there's a lot of different ways to fly. And I think
There's going to be a lot of different ways to make human-level general intelligence. Some will be safer than others, just like blimps blew up more than other modes of flying. Some will be easier to evolve further and further. Some ways of flying in Earth's atmosphere are more easily evolved than the ways of flying into space. How the air balloon doesn't turn into a spacecraft
As well as you could you could take an airplane and sort of morph that to make it a spacecraft but So I think there's going to be multiple different routes and I'm going to briefly mention now Three routes that I think have actual promise one of which is what what I'm currently mostly working on so the first route I think has actual promise is
actually trying to simulate the brain and again the people in this room are among the small percent of the human population who realize how badly current computer science neural nets fare if you think of them as brain simulations. I mean the formal neuron embodied by some threshold function bears very little resemblance to a biological neuron and
Even if you want to look at equation models, you have like Izhekevich's chaotic neuron model. I mean, you have Hodgkin-Huxley equation. I mean, you have mathematical models of a neuron that also aren't quite right, but they at least try. And what's inside current computer science neural nets don't try. Then you have astrocytes, glia, all these other cells in the brain that are known to be helpful with memory. You have all this chemistry in the brain. You have extracellular charge diffusion through the extracellular matrix in the brain, which gives you EEG waves.
You've got a lot of stuff in the brain we don't understand that well and we're not modeling in any computer science neural net system. You also have a few cases known of wet quantum biology doing stuff in the brain and how relevant they are to thinking in the brain is an unknown. But even without going quantum we don't know enough about the brain to make a real
computational brain simulation. There's no reason we couldn't. I had a devious plan for this involving Sophia which hasn't taken off yet. So what I planned is to make her a big fashion star so that having a transparent plate on the back of your head was viewed as highly fashionable. Then you get people to remove the back of their skull and replace it with a transparent plate like Sophia has because it looks really cool.
But then once people have that transparent plate, then you can put like 10,000 electrodes in there and measure everything that's happening in their brain while they go about their lives in real time. With that sort of data, you might be able to make a crack at really doing a biological simulation of the brain. And hopefully someone invents a less hideous and invasive mode of brain measurement, right? I mean, things like fMRI and PET are incredible physics technologies.
I feel like if we got another incredible physics technology to scan the dynamics of the brain with high spatial and temporal precision, we might gather the data we need to make a real brain simulation and, you know, brain measurement is exponentially getting better and better. It's just so far the exponent isn't as fast as with computer software and AI, right? But it's coming along.
Even without better brain simulations, I think we could be doing a lot better. I mean, no one is devoting humongous server farms to, you know, huge nonlinear dynamics simulations of all the different parts of the brain using, say, Izhekevich neurons and chaotic neural networks. I mean, if the amount of resources one big tech company puts into transformers,
We're put into making like large-scale nonlinear dynamics simulations of brain based on detailed biology knowledge. I mean we would gain a lot. We would learn a lot beyond where we are now. We still don't have data on astrocytes and glia and a lot of neurochemistry, right? So we're still missing a lot. Interesting to think about the strengths and weaknesses of that approach though. I mean one weakness is we don't have the data.
Another weakness would be once you have it, all you have is another human in the computer. We've already got billions and billions of irritating humans, right? I mean, granted, that's a human where you can probe everything that happens in their digital brain, right? So then you can learn a lot from it. But the human brain is not designed according to modern software engineering principles or hardware engineering principles, right? For better and worse.
Short-term memory seven plus or minus two. What if you want to jack that up a bit like there's not Probably not a straightforward way to do that. That's probably wrapped up in weird nonlinear dynamic feedbacks between you know, the the hippocampus cortex that thalamus and so and so forth that you were we're not designed to be modded and upgraded in a flexible way and
We do have some interesting adaptive abilities, like if you graft a weird new sense organ into the brain, the brain will often adapt to being able to sense from it. But there's weaknesses, and then there's potential ethical weaknesses also. I mean, the maxim that absolute power corrupts absolutely
was this was a sort of partial truth formulated by observing humans. It's not necessarily a truth about all possible minds. But if you're making a human in a computer, if you do find a way to jack up its intelligence, then maybe you're creating like a horrible science-fictional anti-hero, like this human who lives in a computer and is smarter than everyone else. But no, it will never really have a human body. I mean, we can see how the movie ends. But that's anyway one possible route.
Another possible route, which is very interesting to me and will be a lot of fun, but is not something I'm putting a lot of time into right now, is a more artificial life type approach. I mean the field of A-Life had a peak in the 90s or something. You were trying to make these sort of ecosystems of artificial organisms that would then evolve smarter and smarter little creatures. Didn't go as well as it wanted, but of course
You know, when I was teaching neural nets at University of Western Australia in the 90s, it took three hours to run a 30 neuron network with their current back prop and everyone was bitching that neural nets are bad because they're too slow and will always be too slow, right? So it could be that what happened with artificial, with neural nets can also happen with a life, right? I mean, it could be
Just scale. Certainly the ecosystem has a lot of scale, right? And what you find is when you have more scale, you can screw around with the details more and find out what works.
Seemed like an artificial life, never found quite the right artificial chemistry to underlie the artificial biology. Not that many things were tried. A guy named Walter Fontana had a cool system called algorithmic chemistry in the 90s and then early aughts where he was taking little Lisp programs and just made a big soup in which Lisp codelets would rewrite other Lisp codelets and just trying to get autocatalytic networks to emerge out of that.
didn't go that well but I mean the amount of computational firepower being leveraged there was was very very small right so it seems seems like I mean there's an argument against it which is like it took billions of years for life to emerge on earth with a very large number of molecules involved with doing randomish sort of things on the other hand
We can take leaps, we can watch experiments, we can fine-tune things as a human, like more aggressively than the Holy Creator appears to have done with evolution on Earth. Again, this is something that gets very little attention or resources now, but it would be really interesting to see what
like a Google scale experimentation in artificial life would lead. There's not an obvious commercial upside to early stages of that sort of research as compared to the question answering systems or something. I have some further ideas on how to accelerate artificial life, but I'll mention that at the end because they involve my third plausible route to create AGI systems.
which is what I'm actually working on now and I'll just give a few minutes so that I've given a lot of a lot of talks on it before which you can you can you can find online. So in terms of name brand systems that would be AGI systems I'm working on now is called OpenCog Hyperon which is a new version of the OpenCog system. So we had a system called OpenCog
launched in 2008 based on some older code before that. Now we're making it pretty much from the ground of rewrite of that called Hyperon, but the ideas underlying it could be levered outside that particular name branded system and one way to look at this is that we're hybridizing neuro, symbolic and evolutionary systems.
but not necessarily old fashioned sort of crisp predicate logic. I mean, for those who are into wacky logic systems, it's probabilistic, fuzzy, intuitionistic, paraconsistent logic. So it's a which sort of means probabilistic and fuzzy, probably you know what they mean. Paraconsistent means you can hold two inconsistent thoughts in its head at one time without going going ape shit. Intuitionistic
Pretty much means it builds up all its concepts from experience and observation. So logic theorem prover, right? So we're trying to do with symbolic stuff by actual logic theorem proving. We're using neural nets for recognizing patterns in large volumes of data and synthesizing patterns from that, which they have obviously shown themselves to be quite good at.
We're using evolutionary systems and genetic programming type systems for creativity because I think mutation and crossover are a good paradigm for generating stuff that leverages what was known before but also goes beyond it. But again it depends on what is the level of representation at which you're doing the mutating and crossing over. So we're integrating neural
symbolic and evolutionary methods, not by saying, okay, neural is in the box, symbolic is in the box, evolutionary is in the box, and then the boxes are communicating across these channels. What we're doing, we're making this large distributed knowledge metagraph. A metagraph is like a graph
But you can have links that span more than two nodes, like three, four, five, or a hundred nodes. And you can have graphs pulling the whole sub-graphs. So a hypergraph is a graph which has n-ary as well as binary links. A metagraph goes beyond. You can have links pulling the links or links pulling the general sub-graphs. So we have a distributed knowledge metagraph. There's an in-ram version of the knowledge metagraph also.
We represent neural nets, logic engines, and evolutionary learning inside the same distributed knowledge metagraph. So in a sense, you just have this big graph, parts of it represent static knowledge, part represent active programs. The active parts run by transforming the graph, and the graph represents the intermediate memory of the algorithms also. So you have this big self-modifying, self-rewriting, self-evolving graph.
And the initial state of that graph is that some of it represents neural nets, some of it represents symbolic logic algorithms, some of it represents evolutionary programming, some of it just represents whole bunches of knowledge which could be fed in from databases, they could be fed in by knowledge extraction from large language models or that they could be fed in from pattern recognition on sense perception, right? And to go deeper than this into what we're doing with hyperon
Involves more math than I could go into here, especially without the presentation or anything. But if you look at it, there's a paper I wrote and posted on ArcCyber a couple years ago called The General Theory of General Intelligence. And what I go into there is how you take
Neural learning, probabilistic programming, evolutionary learning, logic theorem proving, you represent them all in a common way using a sort of math called Galois connections. So I use Galois connections to boil these AI algorithms all down to fold and unfold operations over metagraphs. So that's probably gibberish to anyone without some visibility into the functional programming theory literature, but I guess the takeaway from that is
We're trying to use advanced math to represent neural symbolic and evolutionary learning as separate views into common underlying mathematical structures so that they're all kind of different aspects of the same meta algorithm rather than different things living in separate boxes. Now there's a connection between this and
the artificial life approach to AGI, which I would love to approach at some point. And the connection is, if you were brewing a bunch of artificial life populations on many different machines around the world, wouldn't it be interesting to shortcut evolution and train a smart machine learning system to predict which artificial life populations had promise and kill the ones that didn't early, right?
You couldn't do that too aggressively or you're going to kill the hopeful monsters, right? But you could certainly identify a lot of things that just aren't promising and identify something early as really promising and make multiple clones of it, right? So the idea of a narrow AI and then eventually AGI like evolution master to help
brew the artificial life soup seems really really quite interesting to me and maybe could shortcut past the like four billion years of however many billion years life has been evolving on on on earth problem right so i think i mean of course there's also ways more and more advanced ai can help with the neuroscience approach to agi also i mean there's no doubt
I mean, machine learning is already all over neuroscience, so there's no doubt that steps toward AGI could help with inferring things about how the brain works from available neuroscience data. I still think you may fundamentally need more data than we have now. So those three approaches, I think, are all promising and could work. And finally,
I want to briefly note the role of hardware in all this, just for a couple minutes, because that's sort of what ended up bringing me here to Florida right now, actually, was the hardware side of things. So if you look at, you know, what caused neural nets to transition the way that they did? I mean, we were all doing neural nets for decades. They were slow, they were conceptually intriguing, but they weren't doing incredibly, incredibly amazing things.
The reason they took off so much is pretty much porn and video games, right? I mean, it's because GPUs became so popular and the GPUs do matrix multiplication really fast and they plug them into regular PCs. They do matrix multiplication across many processors concurrently. But lo and behold,
Matrix multiplication is also what you need for running many simulations in areas of science, and it's also what you need for running neural nets quickly. So it turned out that these GPU cards, which were created for video games and video rendering, these turned out to be the secret sauce for scaling up
Neural nets just so they could run faster and I mean in 1990 when I was a professor at University of Nevada Las Vegas we had a 10 million dollar Cray YMP supercomputer. It could do 1,000 things at a time which was so much for them 10 million dollars. Remember we programmed it in sort of parallel Fortran and I mean now
GPU, of course garden variety GPU can do more than a thousand things at a time and each of those things is done much faster than the Cray did. So we were playing then with neural nets on this supercomputer. We saw what it could do, but now I mean you have multi GPU servers and racks and racks and racks in them, right? So clearly the hardware innovation, it didn't
Exactly let you take the code we were running in the 80s and 90s and make it work better, but it let you Experiment with that code see what work what didn't work tweak it tweak it tweak it with fast experimentation Find something conceptually fairly similar that does amazing stuff and so one one question is what hardware would let you
pursue these other three approaches to AI that I outlined way, way better than has been done historically and for brain simulation.
I think it's clear what you need are actual neuromorphic chips, right? I mean, most of what are called neuromorphic chips are not so much, but you can, I mean, you can take Isaac Kevich's chaotic neuron and put it on a chip and there's some research papers on it, though it's not being done at scale. I mean, you could take glia and astrocytes and put analysis about them on the chips. I mean, you could, you could try really hard to make an actual neuromorphic chip to drive large scale brain stimulation on the,
On the side of hybrid architectures, I'm actually working on a novel AGI board together with Rachel St. Clair, who introduced me up here, who's a postdoc here and who invited me to come speak here. So Rachel had designed this hypervector chip, which puts on hardware very fast manipulations of very high dimensional bit vectors.
which gives faster ways to implement neural nets, but also faster ways to do various things with logic engines. I developed a chip that allows you to do pattern matching on graphs very fast by putting the graph on hardware. So we figured we can put our hypervector chip, my graph pattern matching chip, deep learning GPU and the CPU, put them on the same board, connect them with very modern fast processor to processor interconnect. Now, maybe if you do that,
you'll have a board that does for this sort of hybrid neuro symbolic evolutionary system, something similar to what GPUs did for neural nets, at least it's a plausible hypothesis. So we're going through the simulation process and looking for manufacturers and so forth. But again, that's both a real project, which I think is cool, done through Rachel's company Simuli, and it's a sort of
just case in point right like we should see we should see a flourishing of more diverse sorts of hardware that bake diverse sorts of AI processing into the hardware and that that's as important as experimenting on the software because we can I mean we can see historically it's a lot of what led us where we are with with neural nets today. So yeah to briefly wrap up I mean it's a
Super exciting point in the history of AI. We have systems that do more human-like stuff than ever before. I think they're not AGI's and cognitive science thinking is very useful for understanding the ways in which they're not intelligent like humans are. On the other hand, I think many of the same underlying technologies are going to be useful for
for building actual AGI. So while I don't think the chat GBT type systems are on the direct path, I mean, I think they're indirect evidence that we are probably not that far off from AGI. So I think I agree with Sam Altman. We could be at human level AGI in five years or something from now. I also won't be shocked if it's 15 years. I'll be shocked if it's 50 years. And what's cool is
How uncontroversial these statements are in so many circles are now, right? It's cool and it's scary, but certainly an exciting time to be doing this sort of research. So if you want to find out more about all this, there's my website, gertzel.org has links to a lot of things I'm doing. The website of my company singularitynet.io
As well as telling about our blockchain-based platform for running AI decentralized across a global network with no central controller, which I think is critical to the ethical rollout of AGI, but I didn't even have time to get into today. And now we all have to go to the beach and have a barbecue.
That was fascinating. Thank you so much, Ben. All right, so questions. So yes. Sometimes we put AGI as a high bar of what we're trying to achieve, but it's probably going to be pretty uneven. So in what ways will it exceed human intelligence? What is the likely scenarios? And will those areas be identifiable by humans? Well, I think that within
Let's say a couple years just to throw a concrete number out there. I think within a couple years of getting a true human level AGI, we will have massively superhuman AGI that will exceed humans in essentially every respective intelligence. So I mean I think once we have an AGI that can do computer science and math and computer programming, that can do the stuff the people in this room can do,
I see no reason it couldn't upgrade its code base and improve the algorithms underlying itself to make itself, say, 1.2 times as smart as it was initially. And then you lather, rinse, repeat. And this gentleman here wrote a paper on this some years ago. So in which ways the first AGI will exceed people is not obvious and could depend on what route you take, right?
If it came out of an approach with a symbolic logic engine in it, it's going to be way better at reasoning than people are. If it came out of a brain simulation, then it might not be better at reasoning than people are. But you can still feed more sensors into it than you can feed into a single human brain, so we'd get some added understanding that way. But no matter how you get there,
I think there's a recursive improvement loop you'll enter into, particularly when you consider you can make a large number of copies of this system, right? Like you have one smart human, okay, well then within reasonable amount of cost you have a hundred, maybe a thousand smart humans, but they can do direct brain-to-brain sharing of knowledge, right? So it's pretty easy to see how you get that recursive self-improvement. I mean you can't rule out there being some limit, but it seems
really outlandish that like there's a fundamental limit and only 1.5 times human intelligence to me that's like saying you'll never make something run more than one and a half times as fast as a cheetah or something that doesn't feel right.
I finally get to ask one question. I'm just, I'm curious because, you know, we think that AIs do have this recursive self-improvement capability, but when we're thinking about AI as a distributed environment with an ecosystem of different AI services and large language models and, you know, all kinds of entities controlled by who knows what, right? Certainly not aligned organizations. Why think that
The future brings this, you know, improvement in the intelligence level. Why not think of the future more in terms of what's been happening in bad scenarios with the amplification of discontent on Facebook, for example? Well, I think the recursive self-improvement at one level, it happened, I mean, in one sense, it happens on a different level than that, right? So when you can think about a large knowledge metagraph,
Like we have in OpenCog Hyperon and we have our own programming language which is called Meta, M-E-T-T-A, Meta Type Talk is the acronym. So we have our Meta language which basically interprets directly into nodes and links. And actually to model the semantics of that
We use the mathematics of the infinity groupoid from category three, which is equivalent to Wolfram's Ruliat that he was talking about. So I thought that was that was interesting that the same he uses this Ruliat structure built of all these hypergraphs.
The Ruliat is basically the infinity groupoid from category theory, although Ruliat is a wizzier name, and we use metagraphs which are like hypergraphs with a few extra features. So actually the self-rewriting, self-organizing data structure we're using in Hyperon is highly similar to this self-rewriting data structure he's using to model fundamental physics, although the
The statistics of the networks you see for modeling particles and objects are different than the statistics of networks you see if you're trying to model common sense knowledge, but there's not a contradiction. Those could be structures on different levels of the same network. So I mean at that level, the ability to self modify and self organize would like occur within the distributed network mind of a single open cog system or something. Now if you're talking about
across the whole planet, I mean, then you're basically, you're looking at two different scenarios before or after the AGI takes over the world, right? I mean, before the AGI takes over the world, you probably have a highly splintered off scenario, like right now where China is building its own networks, the US is building its own networks.
Rasha was trying to before they got distracted murdering people. On the other hand, what we're trying to do with SingularityNet is make it open and decentralized infrastructure for deploying AI. You think of things like the Internet or Linux, which are everywhere with no central controller. If the first AGI is rolled out like that,
It becomes like BitTorrent or something without the illegal copyright aspect. It becomes like it's all over the place. It's running on machines all over the place with different nodes and no one country has a monopoly of it. No one can stop it. But again, in the transitional phase, while we make the transition from narrow AI to AGI and then from the first inklings of AGI to
full-on super AGI, what happens to human society during that transitional phase is a very nasty and difficult question. What happens when 90% of human jobs are obsolete, but the super AGI hasn't yet created a molecular nano-assembler to airdrop into everyone's farm? Then the developed countries will give universal basic income to everyone.
and Africa will remain subsistence farmers with no work outsourced to them, then their kids who can computer hack will hack into the power grid in the West and wreak a lot of havoc. I think there can be quite difficult scenarios in the interim, yet I'm an optimist on the whole in that I think once you have an AGI that's several times human level intelligence,
Then it can just cut through all this. Then it's much smarter than we are. It can build its own robot factories to build new robot factories to create smarter AGI's. And paperclip factories maybe, right? Well, humans become like the squirrels in the national park, right?
I mean, they carry out their own love lives, they hunt, they fight, they build stuff, and the rangers don't try to interfere with their social lives, right? It's going to be fun talking to you more, Ben. I think we're on the same wavelength. Okay, so there were some earlier questions, starting with Garrett, and then Carla. So actually, I have a question, because you did bring it up a little bit, and I could talk a little bit about this, but I just want to ask about this, because you brought up A-Life,
I don't think we fundamentally need different hardware to get to human level AGI. I mean, unless
Unless we're all wrong that classical computing is good enough and you really need quantum computing, which I don't see evidence for, but I can't give it a zero probability. But I mean, by and large, from what we see in the brain and what we see with AI systems out there now, you don't need radically different hardware. But by the same token, you don't need GPUs to do neural nets either, right? You could do it all on CPUs. It just costs more. The thing is, a couple orders of magnitude extra cost and extra power consumption
Is this sort of practical obstacle that can delay something by decades?
But, I mean, in the scope of history, delaying by decades doesn't matter either, right? That's kind of gets to one of my questions to go to, right, which is there's a difference between achieving the goal of AGI with the hardware, despite the outrageous energy cost of mining the core of the planet, to realize this kind of, you know, whatever magnitude order of intelligence greater system.
It seems like with the hardware Rachel and I are working on, you can speed up the operations of systems like OpenCog or biologically realistic neural nets by at least like a couple orders of magnitude. So if you can speed things up by between a hundred and a thousand times, very helpful. So if you think about say a GPG-3 model cost
$5 million, $10 million to train. Well, if you didn't have GPUs, let's just for sake of argument, say it took 100 times longer, right? I mean, so then instead of 10 million, it's a billion dollars. But I mean, there's companies, these companies have a billion dollars, right? So I mean, and, and now, now open their eyes getting $29 billion, right? But the thing is, no one wanted to give them $29 billion before they spent the 10 million, right? So it just, it's the higher cost,
We'll slow things down and making chips that can speed things up by a hundred or five hundred times. I mean, obviously we'll save time off it, but I don't think it's a really fundamental necessity.
If quantum computing were needed, that would be more like a fundamental necessity. I mean, you could of course simulate the Schrodinger equation on a classical computer, but then you're getting into like many, many, many orders of magnitude slowdown that becomes infeasible.
Yeah, thank you. So there are three alternatives towards AGI. One of them is one in which artificial general intelligence emerges from simulating the brain in a kind of individualistic, isolated manner. And the other ones are more like
My question is the role of social intelligence in the development of our general intelligence.
independent of the
tribe and put them in robots or in virtual characters in a game world and let them let them buzz around and do things and certainly with with open cog systems I didn't go into this but we're we're looking at exactly that we're looking at using open cog hyper on systems to control humanoid agents in a 3d virtual world and I want to get them to collaborate to build stuff and what one experiment I'm very interested in is seeing if you can get a collection of open cog agents
to invent their own language to communicate with each other about about about building stuff right so I mean there's you could do social intelligence things really in in any of these paradigms I think a difference is in the sort of neural symbolic evolutionary approach there's more ways to cheat by injecting knowledge into the system's brain like you can inject databases and we we're working on ways to take
All the knowledge from a large language model, like a GPT-4, turn it all into huge predicate logic statements. They can then be fed into a logic engine. If I had a trillion predicate logic statements, which are useful knowledge, I don't know how to feed them into a brain simulation or an A-life system. I do know how to feed them into an open cog system.
That's a difference. How easily can you cheat and inject knowledge? But social intelligence, I think you can do in any of these paradigms and may be a critical thing to do. I mean, we want to experiment with it.
I think Misha had a question. Yes, I just have a quick question. You just mentioned the having different agents and I wanted to hear more about the importance maybe to you whether you consider that important or not of agency and agent in environment in the context of intelligence. I think it's important to human-like intelligence
that we evolved to control these bodies which are solid objects that move around in an environment comprised largely of other solid objects and that we're sharing this environment with other similar looking objects that seem to do kind of the same thing things as us and that we we need to do things collaboratively with them. I mean clearly if you want to ask like what's the
prior distribution over observations and actions, characterizing human intelligence, a lot of it has to do with embodied communication, like shared communication with others with similar bodies in a similar world. Whether that's fundamental to AGI in general, perhaps not, but it's pretty fundamental to human-like general intelligence. Parker had a question.
This is awesome. Thanks. And I'm sure you've talked about this in your other talks. Can you speak to the ratio between the neural net, symbolic and evolutionary? Is one more important than the others? Is it a split or is that proprietary that you can't talk about? I mean, I think in the end, any of these three paradigms could build a very powerful AGI with enough resources.
On the other hand, with 50 lines of Lisp code, you can implement what Marcus Hutter called AIXITL, which could have arbitrarily high intelligence using enough resources. So that's not that strong a statement. I think, yeah, they're each important for different things. And I think you can build a variety of different systems that weights each one more highly if you wanted to, actually. So you could make a system that was mostly a transformer neural net,
with just adding little bits of symbolic reasoning on to stop the Transformer neural net from being too inconsistent with itself or you can make a system that's mostly a symbolic reasoning engine and just outsources low-level pattern recognition and synthesis to a neural net and these would just be different flavors of minds that you know they might differ more than
autistic person from an average person, but there'd be different variations of the same mind architecture. As I'm thinking of it by default, what I'm doing now, this symbolic probabilistic reasoning engine is sort of the core that everything feeds into and we're extracting knowledge from large language models and then we're using neural nets for perception and action. We're using evolutionary learning
to come up with new ideas that are then validated or rejected by the by the reasoning engine. But on the other hand, the software plumbing could sort of be used to make a variety of different systems that weight different components more highly so that I think that will in large measure be
an evolutionary, an experimental question, right? But it's also, it's a bit like asking how important is the attention mechanism inside the transformer, right? Because, I mean, it's there. A lot of other things are there too, and they all have to work together. So, Elon had a question.
Do we have a definition of AGI, right? What do we mean by AGI? Do you have a working definition? It sounds like you have a sort of complementary, which many people do. You know it when you don't see it, right? In some sense you haven't seen that thing, that creative spark, for example. I mean, I think I talked about this in the beginning of the talk. I think there are definitions of
AGI in a broad sense, and Marcus Hutter got the ball rolling on that in his book Universal AI. I mean, Shane Legge's PhD thesis, Shane Legge went on to found Google DeepMind, his PhD thesis gives an algorithmic information theory based definition of general intelligence.
So that's there. So then what we mean by a working definition, I guess, is a definition of human-level AGI. And there isn't a really crisp definition there. And I think that makes sense because it's, I mean, biology and psychology are not so crisp and elegant, right? Like human-level intelligence is just whatever humans happen to evolve about
It's just whatever we happen to evolve to do by now. I think one thing that chat GPT highlights is that being able to do like 80% of what 80% of people do, like most of the time, still
is radically not the same thing as fundamental human level, human level general intelligence. And this is why I said, well, what about the five year turing test, right? Like if I, if I, which kind of ties in with the robot college test, right? So I think, I think what that what that's leaning toward is you would want a system to be able to, over a multi year period,
grow its knowledge from starting point to new knowledge at the end of that multi-year period, at least as well as a human can. And that's different than just having the specific capabilities of a human at any one point in time. So that's something I would definitely look at in practice. If we had a system with a certain level of knowledge and intelligence,
And we didn't upgrade the software, right, but we interacted with and taught the system things, led experiment with things, and it gained vast amount of new capabilities through its own experimentation with the world, including gaining new domains of knowledge, right, like a person does. That would certainly impress me, and that's something that every little kid does, and certainly no
Fine-tuning, I mean that's exactly what it's doing when you give it new information It learns that new information and corporates it's able to do It is not at all gaining fundamentally new skills and won't represent in its training database. No You can learn to go logic system. I can teach a new predicates new functions. It depends what you mean by a fundamentally different skill Yeah, I mean if I mean if there was no for example a young child a
who has never played any musical instrument. You can teach to play a musical instrument, which is different than someone who has mastered ten musical instruments, learns to master the eleventh musical instrument. There's certainly a difference there. It can learn a new language, but it knows a lot of languages already, so that's not
I think now what we need to do is thank Ben very much and thank all of the speakers for today. That was a really interesting day.
All right. Well, after immersing yourself in this encouraging and somber lecture from the Center for the Future Mind, you may be eager to explore more of the astounding and troublesome developments in artificial intelligence. To satiate your curiosity, I invite you to browse through the accompanying playlist, which not only offers a deeper insight into the implications of these breakthroughs, but also sheds light on measured aims at regulating their continued growth. Subscribing allows you to be privy to the upcoming panel discussions exploring non-human consciousness in babies and animals,
Slated to debut approximately one week from now. Take care New announcement the patreon as well as theories of everything org the membership gives you access to personally curated detailed summaries of specific episodes the most recent Steven Wolfram lecture on chat GPT as well as this Ben Gortzel episode replete with references to each book mentioned theorems when they come up at play by play bullet points of conclusions and statements by the guests
Only select episodes will have this feature, so you'll be able to vote on which episode you want most. Because this takes a considerable amount of time, this is my minor way of saying thank you for supporting the Toe Podcast by giving you something edifying to read along or review afterward. Again, that's by signing up at patreon.com slash Kurtjmungle. Or if you don't like that website, then there's theoriesofeverything.org.
▶ View Full JSON Data (Word-Level Timestamps)
{
"source": "transcribe.metaboat.io",
"workspace_id": "AXs1igz",
"job_seq": 8576,
"audio_duration_seconds": 4709.21,
"completed_at": "2025-12-01T01:10:03Z",
"segments": [
{
"end_time": 26.203,
"index": 0,
"start_time": 0.009,
"text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region."
},
{
"end_time": 53.217,
"index": 1,
"start_time": 26.203,
"text": " I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines."
},
{
"end_time": 64.514,
"index": 2,
"start_time": 53.575,
"text": " As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
},
{
"end_time": 90.469,
"index": 3,
"start_time": 66.152,
"text": " AGI in five years or something from now. What's cool is how uncontroversial these statements are in so many circles are now, right? We will have massively superhuman AGI that will exceed humans in essentially every respect of intelligence. So I mean, I think what happens to human society during that transitional phase is very nasty and difficult."
},
{
"end_time": 110.196,
"index": 4,
"start_time": 91.783,
"text": " Today we talk about large language models, artificial general intelligence, and merging with machines. Ben Gortzel is a computer scientist, a mathematician, and an entrepreneur. He's the founder and CEO of Singularity Net, and his work focuses on AGI, which aims to create truly intelligent machines that can learn, reason, and think like humans."
},
{
"end_time": 137.944,
"index": 5,
"start_time": 110.196,
"text": " This was another talk that was conducted at the Center for the Future Mind at the MindFest conference at the beautiful beach of Florida Atlantic University. Expect to hear more discussions on AI by speakers like Gortzel. Gortzel will also appear with David Chalmers on an upcoming episode, as well as David Chalmers on his own, as well as Wolfram on his own, as a part two to the lecture that he's already given. Part one for Wolfram is listed here. This will occur over the next few days and weeks on the Theories of Everything channel."
},
{
"end_time": 166.852,
"index": 6,
"start_time": 137.944,
"text": " As usual, links to all are in the description. I would like to thank Susan Schneider, Professor of Philosophy, because this conference would not have occurred without her spearheading it, without her foresight, without her inviting theories of everything as the exclusive filmer of the event, as well as we're going to host a panel that will be on later with David Chalmers and Susan Schneider. I also would like to thank Brilliant for being able to defer some of the traveling costs. Brilliant is a place where there are bite-sized interactive learning experiences for science, engineering, and mathematics."
},
{
"end_time": 189.411,
"index": 7,
"start_time": 166.852,
"text": " artificial intelligence in its current form uses machine learning which uses neural nets often at least and there are several courses on brilliance websites teaching you the concepts underlying neural nets and computation in an extremely intuitive manner that's interactive which is unlike almost any of the tutorials out there they quiz you i personally took the course on random variable distributions and knowledge and uncertainty"
},
{
"end_time": 209.94,
"index": 8,
"start_time": 189.411,
"text": " because I wanted to learn more about entropy, especially as there may be a video coming out on entropy, as well as you can learn group theory on their website, which underlies physics, that is SU3 x SU2 x U1 is the Standard Model Gauge Group. Visit brilliant.org slash TOE to get 20% off your annual premium subscription."
},
{
"end_time": 222.619,
"index": 9,
"start_time": 209.94,
"text": " As usual, I recommend you don't stop before four lessons. You have to just get wet. You have to try it out. I think you'll be greatly surprised at the ease at which you can now comprehend subjects you previously had a difficult time grokking."
},
{
"end_time": 252.858,
"index": 10,
"start_time": 223.2,
"text": " Thanks for inviting me and it's a really, it's a"
},
{
"end_time": 267.09,
"index": 11,
"start_time": 253.387,
"text": " Fun time to be at a conference of this nature with all the buzz around the AI and intelligence and AGI and so forth. I mean as Rachel alluded"
},
{
"end_time": 291.8,
"index": 12,
"start_time": 268.131,
"text": " I've been doing this stuff for a while, like many of the speakers here. I mean, I did my PhD in math. I did my PhD in math in the late 80s, but even at that point I was interested in AI and the mathematics of mind and in implementing the mathematics of mind in computers. And of course, most people in"
},
{
"end_time": 315.964,
"index": 13,
"start_time": 292.278,
"text": " This room, no, though most people on the planet do not, that the AI field was already quite old by the 1980s, right? I mean, now, you know, in the Uber ride over here, I told the lady driving the Uber, I was going to a conference on basically"
},
{
"end_time": 334.616,
"index": 14,
"start_time": 316.63,
"text": " Machines that think like people how to make machines that machines think like people. I mean she obviously had no tech background I said I've been working on this since the 80s. So first of all, she's like oh I had no idea people working on this that long ago. Secondly, she's like but I"
},
{
"end_time": 364.104,
"index": 15,
"start_time": 335.213,
"text": " I thought that had already been done in machines because I already think like people, right? So I mean her assumption is that it had already been solved and was running in the background making some billionaires money and like that was just the state of things, right? So, but it's interesting. That certainly wasn't the case five or ten years ago, right? But I think, you know, folks in this room are aware that work on these topics has been going on"
},
{
"end_time": 390.179,
"index": 16,
"start_time": 364.838,
"text": " a very long time and also that many perhaps almost all of the core ideas underlying what's happening in AI now are also fairly old ideas which have been improved and tweaked and built on of course as better and better computers and more data and better network and so on have allowed"
},
{
"end_time": 417.773,
"index": 17,
"start_time": 391.681,
"text": " allowed implementation of things just at a larger scale, thus experimentation at a larger scale. So, you know, it's been fascinating to hear talks on fundamental nature of consciousness and consciousness in babies and then organoids and then on the structure and dynamics of the physical universe being addressed"
},
{
"end_time": 445.026,
"index": 18,
"start_time": 418.234,
"text": " using data structures and dynamical equations really more characteristic of AI and computer science than the physics. So there's clearly fascinating sort of convergence and cross-pollination going on with biology, psychology, physics, computer science, math. I mean, more and more everything is coming together. What I want to focus on in my talk"
},
{
"end_time": 473.387,
"index": 19,
"start_time": 446.459,
"text": " today is what I think are viable paths to get from where we are now, where we have machines that can fool the random Uber driver into thinking that they are human-like general intelligences. How do we get from where we are now to machines that actually are human-level general intelligences and"
},
{
"end_time": 492.927,
"index": 20,
"start_time": 474.206,
"text": " I believe shortly after that we will have machines that are vastly greater than human-level general intelligences. I'll say a little bit about that, but I'm going to focus more on the path from here to human-level AGI."
},
{
"end_time": 520.64,
"index": 21,
"start_time": 493.66,
"text": " Few preliminaries before I get into that. I'll talk a little bit about these GPT type systems which are sort of the order of the day and a little bit about how I connect intelligence with consciousness. I'll try to go through these huge topics fairly rapidly and then move on to approaches to engineering AGI. So I think regarding"
},
{
"end_time": 546.271,
"index": 22,
"start_time": 521.493,
"text": " Chat GPT, other transformer neural networks, a bunch of correct and interesting things have been said here already. I mean, I was also impressed and surprised by some aspects of the function of these systems. I was also not surprised that they lack"
},
{
"end_time": 559.514,
"index": 23,
"start_time": 546.852,
"text": " certain sorts of fundamental creativity and the ability to do sustained precise reasoning. And I think, you know, while I was surprised at just how well"
},
{
"end_time": 582.739,
"index": 24,
"start_time": 559.923,
"text": " Chat GPT and similar systems can sort of bullshit and bloviate and write college admission essays and all that. In a way, I'm not surprised that I'm surprised because I know that my brain doesn't have a good intuition for what you do when you take the entire web and feed it into a database and put some smart look up on top."
},
{
"end_time": 602.619,
"index": 25,
"start_time": 582.739,
"text": " Just like I know my brain is bad at reasoning about the difference between a septillion and a nonillion. We're not really well adapted to think about some of these things that we don't come across in our everyday lives. So even now, if you ask ChatGBT to compose a poem, and it does, I don't have a good intuitive sense for"
},
{
"end_time": 627.278,
"index": 26,
"start_time": 602.944,
"text": " How many poems of roughly similar nature are on the web and were fed into it? You could do that archaeology with great time and effort by putting probes into the network while it does certain queries, but that's a lot of work. What's intriguing to me as someone who's written and thought a lot about general intelligence is these systems achieve what"
},
{
"end_time": 650.589,
"index": 27,
"start_time": 628.302,
"text": " appears like a high level of generality relative to the individual human, right? They're very general compared to an individual human's mind, but they achieve that mostly by having a very, very broad and general training database, right? They don't do big leaps beyond their training database, but they can do what appears very general to an individual human and"
},
{
"end_time": 680.759,
"index": 28,
"start_time": 651.169,
"text": " without making big leaps beyond their training database, because their training database has the whole fucking web in it, right? I mean, that's a very interesting thing to do. It's a cool thing to do. It may be a valuable thing to do, right? It may be that using this sort of AI, not GPT in particular, but large language models, transformer neural nets of this character, done cross-modally, integrated with reinforcement learning, I mean, it may be that with this type of AGI,"
},
{
"end_time": 701.425,
"index": 29,
"start_time": 681.254,
"text": " Excuse me. It may be that with this type of narrow AI system, even falling short of general intelligence in the human sense, I won't be shocked if that ultimately absolutes like 95% of human jobs, right? I mean, there's a lot more work to be done to get there, of course, and many jobs involve"
},
{
"end_time": 725.077,
"index": 30,
"start_time": 702.09,
"text": " physical manipulation and integrating LLMs with robots through physical manipulation is hard as we see from these robots here like this dog whose head just fell off and Sophia who's on a tripod, although she's been on wheels and legs at various times. So there's a lot of engineering work, there's a lot of training and tuning work, but fundamentally I wouldn't be shocked if"
},
{
"end_time": 753.404,
"index": 31,
"start_time": 725.52,
"text": " Very high percent of jobs that humans now get paid to do could be could be obsolete by this sort of technology. And I mean, there are going to be jobs like a preschool teacher or hospice care therapist that you just want to be done by a person because it's about it's about human to human connection, just like we see live music, even though recorded music may sound better, because we like being in the room with people, other humans playing music, right, but"
},
{
"end_time": 760.52,
"index": 32,
"start_time": 754.104,
"text": " That's a minority of jobs that people do, and there are also jobs doing things that"
},
{
"end_time": 790.623,
"index": 33,
"start_time": 761.237,
"text": " These sorts of neural nets, I think, will never be capable of, and I'll come to that in a moment, though I think different sorts of algorithms could do it. But it's not a big percent of human jobs, right? So one lesson to draw here is almost everything people get paid to do is just rote and repetitive recycling of stuff that's already been done before, right? So if you feed a giant neural net with a lot of examples of everything that's been done before that can then pick stuff out of this database and merge them together judiciously,"
},
{
"end_time": 819.667,
"index": 34,
"start_time": 791.049,
"text": " Okay, you eliminate most of what people get paid to do. It takes a little bit of time to roll this out in practice, not necessarily that long. I know some friends of mine who were from the AGI research community started a company called Apprenti maybe four or five years ago. They started out wanting to build AGI. Their VC investors channeled them, as would be the usual case, into doing some particular application instead. What they ultimately did was automate the McDonald's drive-through."
},
{
"end_time": 849.531,
"index": 35,
"start_time": 820.35,
"text": " They sold the company to McDonald's maybe two and a half years ago. It's now their technology is starting to get rolled out in some real McDonald's around the world, right? So you're getting that guy who sits behind the drive-through window listening to stuff over that noisy horrible microphone like give me a Big Mac and fries, hold the ketchup, right? So they're finally automating away these people which"
},
{
"end_time": 856.237,
"index": 36,
"start_time": 850.589,
"text": " What's one thing that's interesting to me there is to see from that technology being shown to work"
},
{
"end_time": 885.981,
"index": 37,
"start_time": 856.749,
"text": " to it actually being deployed across all the McDonald's is taking at least five years, right? So I mean it's obvious that could be automated to me a long time ago. It was shown two and a half years ago that it could work on some McDonald's, but it's still not rolled out everywhere. It's rolled out in certain states, right? But then even like replacing the guy pushing the hamburger on the cash register with a touchscreen where you push the hamburger yourself, like even that's taking a long time to get rolled out."
},
{
"end_time": 906.476,
"index": 38,
"start_time": 887.125,
"text": " You know, these practical transitions will take a while. They're really, really interesting, but there's some things I think are held back not by practical issues, but by fundamental limitations of this sort of technology. In essence, I think these are anything that intrinsically requires"
},
{
"end_time": 936.578,
"index": 39,
"start_time": 908.2,
"text": " Taking a big leap beyond everything you've seen before and this sort of gets it the fundamental difference between what I think of as narrow AI and what I think as AGI. What I think of as AGI, artificial general intelligence, which is the term I introduced in 2004 or something in an edited book by that name from Springer. And this refers to a system"
},
{
"end_time": 962.483,
"index": 40,
"start_time": 937.244,
"text": " that has a robust capability to generalize beyond its programming and training and its experience and sort of take a leap into the unknown and that you know every baby does that, every child does that. I mean I have a five-year-old and a two-year-old now and three grown kids and every one of them has made an impressive series of wild leaps into the unknown like as they learn to do stuff that we all consider"
},
{
"end_time": 987.329,
"index": 41,
"start_time": 963.456,
"text": " Basic. Now that doesn't mean an AI system couldn't do the same things a two and five-year-old can do without itself making a leap into the unknown. It can do it by watching what a billion two-year-olds did and interpolating, but kids still do that. In terms of job functions that adults do, I mean, doing impressive science"
},
{
"end_time": 1016.681,
"index": 42,
"start_time": 988.422,
"text": " Almost always involves making a leap into the unknown. I mean, there's a bunch of garbagey science papers out there. But if you look at the Facebook Galactica system, which was released and then retracted by Facebook, I mean, a large language model for generating science papers and such, you can see the gap between what large language models can do now and even pretty bad mediocre science. Like what Galactica sped out was pretty much science looking gibberish. Like it, you know, it sped out like"
},
{
"end_time": 1045.384,
"index": 43,
"start_time": 1017.159,
"text": " you ask it, tell me about the Lenin-Ono conjecture and it will split out some trivial identity of set theory invented by John Lenin and Yoko Ono and it's amusing but it's not able to do science at the level of a mediocre master student, right, let alone a really strong professional researcher and I mean the part of the core reason there"
},
{
"end_time": 1075.179,
"index": 44,
"start_time": 1046.032,
"text": " is about taking a step beyond what was there. It's specifically not about just recombining in a facile way what was there before. So writing an undergrad essay for like English 101 kind of is about making a facile recombination of what was there before. So that's already automated away and we have to find other ways to attempt to assess undergrad students, right? So I mean in music I would say synthesizing like a new 12 bar blues song"
},
{
"end_time": 1099.121,
"index": 45,
"start_time": 1077.278,
"text": " There's no release system that can do that now but I'm sure it's coming in the next few years and some folks on my team in Singularity Net are working on that too. I mean Google's model music LM goes part way there but it's not it's not released it's clear how to do better. On the other hand think about Think Verizon the best 5G network is expensive? Think again. Bring in your AT&T or T-Mobile bill to a Verizon store"
},
{
"end_time": 1130.708,
"index": 46,
"start_time": 1104.121,
"text": " If you fed a large language model or other comparable neural net with all music"
},
{
"end_time": 1148.916,
"index": 47,
"start_time": 1131.271,
"text": " Composed up to the year 1900, let's say, just supposing you had it on the database. Is it going to invent jazz? Is it going to invent progressive jazz? Is it going to invent rock? You could ask it, let's put West African drumming together with Western classical music and church hymns. It's going to give you"
},
{
"end_time": 1167.619,
"index": 48,
"start_time": 1149.497,
"text": " Mozart and swing low sweet chariot with a West African polyrhythmic beat, which may be really cool, but it's not going to bring you to Charlie Parker and John Coltrane, Jimi Hendrix and Spongel or whatever else. It's just not going to do it."
},
{
"end_time": 1197.466,
"index": 49,
"start_time": 1168.097,
"text": " There's a sense in which this sort of creativity is combinatorial, right? I mean, jazz is putting together West African rhythms with Western church music and chord progressions, and rock is drawing from jazz and simplifying it and so forth. But that type of combination that's being done when humans do this sort of fundamental creativity, it's different than the type of combination the chat GPT type system is doing, which really has to do"
},
{
"end_time": 1225.64,
"index": 50,
"start_time": 1198.046,
"text": " with how knowledge is represented inside the system. So I think these systems can pass the Turing test. They may not quite yet, but I think they can certainly, if you're talking about fooling a random human that it's a human, it can probably already do that. I suspect without solving the ATI problem you could create a system that would fool me or anyone into thinking it was a human in a"
},
{
"end_time": 1242.466,
"index": 51,
"start_time": 1226.169,
"text": " in a conversation because so many conversations have been had already and people aren't necessarily that clever either, right? I don't think these systems could pass a five-year long Turing test because I could take a person of average intelligence"
},
{
"end_time": 1268.968,
"index": 52,
"start_time": 1242.91,
"text": " And I could teach them computer programming and a bunch of math. And I could teach them to build things and so on over a long period of time. And I don't think you could ever teach GPT-4 or chat-GPT in that sort of way. So if you give me five years with a random human, I could turn them into a decent AI programmer and electronics engineer and so forth. And that goes beyond, right? But Alan Turing didn't define the Turing test."
},
{
"end_time": 1294.991,
"index": 53,
"start_time": 1269.36,
"text": " as five years long. He defined it as a brief chat. But of course, he wasn't imagining what would happen if you put all of the web into a lookup table either, right? Because he was very smart, but that was a lot to see at that point in time. So I think, I mean, another example of something I think this sort of system wouldn't be able to do"
},
{
"end_time": 1323.166,
"index": 54,
"start_time": 1295.64,
"text": " Let's say business strategy or, you know, political policy planning at the high level because they're the nature of it is you're dealing with the world that's changing all the time and always throwing something new and weird at you that you didn't expect. And if you're really just recombining previous strategies, it's not a terrible thing to do, but it's not what has built the greatest companies. What's built the greatest companies is"
},
{
"end_time": 1350.179,
"index": 55,
"start_time": 1323.763,
"text": " You know, pivoting in a weird way, making elite beyond experience. So, I mean, there certainly are things humans do that go beyond this sort of facile, large-scale pattern matching and pattern synthesis, but it's interesting how far you have to reach to find them. On the other hand, it does mean if you had a whole society of chat GPTs,"
},
{
"end_time": 1379.411,
"index": 56,
"start_time": 1350.674,
"text": " It will never progress, right? I mean, and some people might like that better, but it would genuinely be, it will be stuck, stuck now in its shallow derivations of where you could get from now, right? It's never going to make another, it's not going to launch the singularity. And it's, there's a lot of other smaller things in that it's not going to do. So I dwelt on that a bit, partly because it's topical, but partly because I think it frames the discussion on"
},
{
"end_time": 1406.067,
"index": 57,
"start_time": 1380.947,
"text": " general intelligence reasonably well in the sense that it highlights quite vividly what isn't a general intelligence, right? Now, what is an intelligence is a bigger and subtler question, obviously, and I'm going to mostly bypass"
},
{
"end_time": 1429.548,
"index": 58,
"start_time": 1406.886,
"text": " Problems of consciousness that were discussed here this morning not not because they're not interesting, but they're just that's a whole whole subject area in itself, and I don't have that much time, but I mean I'm Fundamentally I'm somewhat panpsychist in in orientation. So I mean I tend to think that you know this microphone is"
},
{
"end_time": 1457.278,
"index": 59,
"start_time": 1430.128,
"text": " has its own form of consciousness and I don't care much if you want to call it consciousness or proto-consciousness or blah blah blah or whatever but I think that the essence of what it is to experience to me is just imminent in everything and it does manifest itself differently in the human brain than in a microphone and how similarly it will manifest itself in a human level AGI from a biological brain"
},
{
"end_time": 1483.695,
"index": 60,
"start_time": 1457.841,
"text": " It's a very interesting question and it probably depends on many aspects of how similar that AGI is to the human brain. So like what's the continuity between structure and dynamics of a cognitive system and the experience felt by that system? Is a small change in structure and dynamics obviously do a small change in the felt experience? There's a lot of fascinating"
},
{
"end_time": 1499.65,
"index": 61,
"start_time": 1484.206,
"text": " Subtle questions there, which I'm going to punt on for now. We can give another talk about that. That's another time but What is intelligence? It's a slightly more interesting and relevant question. I also think not such a critical one like"
},
{
"end_time": 1527.142,
"index": 62,
"start_time": 1500.316,
"text": " Fussing about what is life is not much of what biologists do. And you can do a lot of progress in synthetic biology without fussing around about what is life and worrying about whether a virus really is or isn't alive. Like who really cares? It's a virus. It's doing its thing. It has some properties we like to call lifelike. It lacks some others. And synthetic biology systems, each of them may have some properties we consider lifelike and lack some others. That's fine."
},
{
"end_time": 1558.609,
"index": 63,
"start_time": 1528.66,
"text": " But there's still something to be gained by thinking a little bit about what is intelligence, what is general intelligence. Marcus Hooter had a book called Universal AI published in 2005 or something. He proposed a formalization of intelligence which basically is the ability to achieve computable reward functions in computable environments."
},
{
"end_time": 1585.742,
"index": 64,
"start_time": 1558.899,
"text": " You have to weight them, so you're averaging overall reward functions in all environments and what he does is he weights the simpler ones higher than the more complex ones and this leads to a bunch of fairly vexed questions of how do you measure simplicity and how equivalent is it to transfer between one measure of what environments rewards are simpler than others. But one thing that's very clear when thinking about this sort of definition of intelligence is"
},
{
"end_time": 1612.739,
"index": 65,
"start_time": 1586.152,
"text": " Humans are pretty damn stupid. We're very bad at optimizing arbitrary computable reward functions in arbitrary environments. For example, rounding a 708-dimensional maze is very hard for us, which is not even a complex thing to formalize. We learn to run a 2D maze, 3D maze maybe. Beyond that, most people become very confused, but then"
},
{
"end_time": 1638.37,
"index": 66,
"start_time": 1613.217,
"text": " I mean, in the set of all computable environments and reward functions, there may be far more higher dimensional mazes than two or three dimensional mazes, depending on how you're weighting things, right? Let alone fractal dimensional mazes. I mean, so there's a lot of things we're just bad at. We come out very dumb by that criterion, which may be okay. We don't have to be the smartest systems in the universe. An alternate"
},
{
"end_time": 1660.589,
"index": 67,
"start_time": 1638.933,
"text": " A more philosophically deep way of thinking about intelligence was given by my friend Weaver, aka David Weinbaum, in his PhD thesis from the Free University of Brussels, which was called Open-ended Intelligence. He's going back to continental philosophy in the Deluse and Qatari and so forth. He's looking at"
},
{
"end_time": 1679.326,
"index": 68,
"start_time": 1661.049,
"text": " Intelligent systems as a complex self-organizing system, which is driven by the dual complementary and contradictory drives of individuation, trying to maintain your boundary as a system, which relates somewhat to autonomy as it was discussed earlier today, but is I think more clearly defined."
},
{
"end_time": 1706.715,
"index": 69,
"start_time": 1679.718,
"text": " Individuation and then self-transcendence, which is basically trying to develop so that the new version of yourself, while connected by some continuous thread with the older version of yourself, also has properties and interactions that the old version of yourself could never understand. Of course, all of us have both individuated and self-transcended over the course of our human lives. Human species has also."
},
{
"end_time": 1724.48,
"index": 70,
"start_time": 1707.363,
"text": " This doesn't necessarily contradict Marcus Hutter's way of looking at it. I mean, you could say through the iterated process of individuation and self-transcendence, maybe we've come to be able to optimize even more reward functions and even more environments, right?"
},
{
"end_time": 1750.077,
"index": 71,
"start_time": 1725.179,
"text": " All these abstract ways of looking at things don't really give us a way to tell how much smarter a human is than an octopus, or how smart a chat GPT is relative to Sophia, or exactly how far we've progressed toward AGI. I think all these theoretical considerations have a lot of mathematical moving parts that are quite abstract."
},
{
"end_time": 1768.558,
"index": 72,
"start_time": 1750.981,
"text": " In practice what we see is most people will give credit to chat GPT for being human level AGI even though experts can see it isn't. I had posed a while ago what I called the robot college student test where I figured if you had a robot say a couple dot versions ahead of this one"
},
{
"end_time": 1797.517,
"index": 73,
"start_time": 1769.104,
"text": " You have a robot that can go to, let's say, MIT, do the same exact things as a student, roll around the classes, sit and listen to the assignments, take the exams, do the programming homework, including group assignments, and then graduate. And then I figure then I'm going to accept that thing is, in effect, a human level general intelligence. And I mean, I'm not 100% on that, so I might be able to hack that. But you can see the university is set up"
},
{
"end_time": 1827.944,
"index": 74,
"start_time": 1798.302,
"text": " It is set up precisely for that purpose, right? I mean it's set up to teach, it's set up to teach a science university especially, it's set up to teach the ability to do science, which involves leaping beyond what was known before, and it's set up to try to stop you from cheating also, right? So I mean I'm assuming that the robot isn't going to class and cheating by like sending 4G messages to some scientists in Azerbaijan or something, but like going"
},
{
"end_time": 1843.336,
"index": 75,
"start_time": 1828.473,
"text": " going through it in a genuine way. But again, that sort of test, you could argue about the details, it's measuring human-like general intelligence. I mean, it's very clear you could have a system"
},
{
"end_time": 1872.756,
"index": 76,
"start_time": 1843.916,
"text": " that's much, much smarter than people in the fundamental sense, but misses social cues so that it wouldn't do well in group assignments in college or something. And you can see that from the fact that there are autistic geniuses who are human and would miss social cues and do poorly in group assignments. And they're still within the scope of human systems. So I'd say fundamentally, you know, articulating what is intelligence"
},
{
"end_time": 1879.701,
"index": 77,
"start_time": 1873.387,
"text": " It's an interesting quest to pursue. I'm not sure we've gotten to a final"
},
{
"end_time": 1910.026,
"index": 78,
"start_time": 1880.162,
"text": " consensus on what is intelligence that bridges the abstract to the concrete. I'm not sure that we need to. It's pretty clear we don't need to. Like we could make a breakthrough to human level AGI and even superhuman AGI and we still haven't pinned down what is intelligence. I mean just as I think we could do synthetic biology to make weird new freakish life forms come out of the lab without having a consensus on like fundamentally what is life."
},
{
"end_time": 1931.834,
"index": 79,
"start_time": 1910.674,
"text": " I don't see any reason there's one true golden path. I mean, I think a well-worn but decent example"
},
{
"end_time": 1957.79,
"index": 80,
"start_time": 1933.166,
"text": " is manned flight. I mean, you've got airplanes, you got helicopters, you got you got spacecraft, you got you got blimps, you've got probably ways of flying that we have pedal powered flight machines, you probably many things that we haven't, we haven't thought of rid that right. I mean, so I mean, there you add the fundamental principles of aerodynamics and fluid mechanics. And when you know that you can figure there's a lot of different ways to fly. And I think"
},
{
"end_time": 1984.616,
"index": 81,
"start_time": 1959.838,
"text": " There's going to be a lot of different ways to make human-level general intelligence. Some will be safer than others, just like blimps blew up more than other modes of flying. Some will be easier to evolve further and further. Some ways of flying in Earth's atmosphere are more easily evolved than the ways of flying into space. How the air balloon doesn't turn into a spacecraft"
},
{
"end_time": 2013.643,
"index": 82,
"start_time": 1985.162,
"text": " As well as you could you could take an airplane and sort of morph that to make it a spacecraft but So I think there's going to be multiple different routes and I'm going to briefly mention now Three routes that I think have actual promise one of which is what what I'm currently mostly working on so the first route I think has actual promise is"
},
{
"end_time": 2040.879,
"index": 83,
"start_time": 2014.667,
"text": " actually trying to simulate the brain and again the people in this room are among the small percent of the human population who realize how badly current computer science neural nets fare if you think of them as brain simulations. I mean the formal neuron embodied by some threshold function bears very little resemblance to a biological neuron and"
},
{
"end_time": 2071.288,
"index": 84,
"start_time": 2041.374,
"text": " Even if you want to look at equation models, you have like Izhekevich's chaotic neuron model. I mean, you have Hodgkin-Huxley equation. I mean, you have mathematical models of a neuron that also aren't quite right, but they at least try. And what's inside current computer science neural nets don't try. Then you have astrocytes, glia, all these other cells in the brain that are known to be helpful with memory. You have all this chemistry in the brain. You have extracellular charge diffusion through the extracellular matrix in the brain, which gives you EEG waves."
},
{
"end_time": 2100.077,
"index": 85,
"start_time": 2071.971,
"text": " You've got a lot of stuff in the brain we don't understand that well and we're not modeling in any computer science neural net system. You also have a few cases known of wet quantum biology doing stuff in the brain and how relevant they are to thinking in the brain is an unknown. But even without going quantum we don't know enough about the brain to make a real"
},
{
"end_time": 2129.821,
"index": 86,
"start_time": 2100.657,
"text": " computational brain simulation. There's no reason we couldn't. I had a devious plan for this involving Sophia which hasn't taken off yet. So what I planned is to make her a big fashion star so that having a transparent plate on the back of your head was viewed as highly fashionable. Then you get people to remove the back of their skull and replace it with a transparent plate like Sophia has because it looks really cool."
},
{
"end_time": 2159.838,
"index": 87,
"start_time": 2130.128,
"text": " But then once people have that transparent plate, then you can put like 10,000 electrodes in there and measure everything that's happening in their brain while they go about their lives in real time. With that sort of data, you might be able to make a crack at really doing a biological simulation of the brain. And hopefully someone invents a less hideous and invasive mode of brain measurement, right? I mean, things like fMRI and PET are incredible physics technologies."
},
{
"end_time": 2183.148,
"index": 88,
"start_time": 2160.435,
"text": " I feel like if we got another incredible physics technology to scan the dynamics of the brain with high spatial and temporal precision, we might gather the data we need to make a real brain simulation and, you know, brain measurement is exponentially getting better and better. It's just so far the exponent isn't as fast as with computer software and AI, right? But it's coming along."
},
{
"end_time": 2208.166,
"index": 89,
"start_time": 2184.138,
"text": " Even without better brain simulations, I think we could be doing a lot better. I mean, no one is devoting humongous server farms to, you know, huge nonlinear dynamics simulations of all the different parts of the brain using, say, Izhekevich neurons and chaotic neural networks. I mean, if the amount of resources one big tech company puts into transformers,"
},
{
"end_time": 2235.418,
"index": 90,
"start_time": 2208.473,
"text": " We're put into making like large-scale nonlinear dynamics simulations of brain based on detailed biology knowledge. I mean we would gain a lot. We would learn a lot beyond where we are now. We still don't have data on astrocytes and glia and a lot of neurochemistry, right? So we're still missing a lot. Interesting to think about the strengths and weaknesses of that approach though. I mean one weakness is we don't have the data."
},
{
"end_time": 2260.691,
"index": 91,
"start_time": 2235.998,
"text": " Another weakness would be once you have it, all you have is another human in the computer. We've already got billions and billions of irritating humans, right? I mean, granted, that's a human where you can probe everything that happens in their digital brain, right? So then you can learn a lot from it. But the human brain is not designed according to modern software engineering principles or hardware engineering principles, right? For better and worse."
},
{
"end_time": 2284.445,
"index": 92,
"start_time": 2261.544,
"text": " Short-term memory seven plus or minus two. What if you want to jack that up a bit like there's not Probably not a straightforward way to do that. That's probably wrapped up in weird nonlinear dynamic feedbacks between you know, the the hippocampus cortex that thalamus and so and so forth that you were we're not designed to be modded and upgraded in a flexible way and"
},
{
"end_time": 2306.135,
"index": 93,
"start_time": 2284.701,
"text": " We do have some interesting adaptive abilities, like if you graft a weird new sense organ into the brain, the brain will often adapt to being able to sense from it. But there's weaknesses, and then there's potential ethical weaknesses also. I mean, the maxim that absolute power corrupts absolutely"
},
{
"end_time": 2336.937,
"index": 94,
"start_time": 2307.056,
"text": " was this was a sort of partial truth formulated by observing humans. It's not necessarily a truth about all possible minds. But if you're making a human in a computer, if you do find a way to jack up its intelligence, then maybe you're creating like a horrible science-fictional anti-hero, like this human who lives in a computer and is smarter than everyone else. But no, it will never really have a human body. I mean, we can see how the movie ends. But that's anyway one possible route."
},
{
"end_time": 2367.449,
"index": 95,
"start_time": 2337.654,
"text": " Another possible route, which is very interesting to me and will be a lot of fun, but is not something I'm putting a lot of time into right now, is a more artificial life type approach. I mean the field of A-Life had a peak in the 90s or something. You were trying to make these sort of ecosystems of artificial organisms that would then evolve smarter and smarter little creatures. Didn't go as well as it wanted, but of course"
},
{
"end_time": 2387.398,
"index": 96,
"start_time": 2368.063,
"text": " You know, when I was teaching neural nets at University of Western Australia in the 90s, it took three hours to run a 30 neuron network with their current back prop and everyone was bitching that neural nets are bad because they're too slow and will always be too slow, right? So it could be that what happened with artificial, with neural nets can also happen with a life, right? I mean, it could be"
},
{
"end_time": 2398.968,
"index": 97,
"start_time": 2387.944,
"text": " Just scale. Certainly the ecosystem has a lot of scale, right? And what you find is when you have more scale, you can screw around with the details more and find out what works."
},
{
"end_time": 2426.169,
"index": 98,
"start_time": 2399.531,
"text": " Seemed like an artificial life, never found quite the right artificial chemistry to underlie the artificial biology. Not that many things were tried. A guy named Walter Fontana had a cool system called algorithmic chemistry in the 90s and then early aughts where he was taking little Lisp programs and just made a big soup in which Lisp codelets would rewrite other Lisp codelets and just trying to get autocatalytic networks to emerge out of that."
},
{
"end_time": 2452.739,
"index": 99,
"start_time": 2426.834,
"text": " didn't go that well but I mean the amount of computational firepower being leveraged there was was very very small right so it seems seems like I mean there's an argument against it which is like it took billions of years for life to emerge on earth with a very large number of molecules involved with doing randomish sort of things on the other hand"
},
{
"end_time": 2476.886,
"index": 100,
"start_time": 2453.456,
"text": " We can take leaps, we can watch experiments, we can fine-tune things as a human, like more aggressively than the Holy Creator appears to have done with evolution on Earth. Again, this is something that gets very little attention or resources now, but it would be really interesting to see what"
},
{
"end_time": 2505.759,
"index": 101,
"start_time": 2477.602,
"text": " like a Google scale experimentation in artificial life would lead. There's not an obvious commercial upside to early stages of that sort of research as compared to the question answering systems or something. I have some further ideas on how to accelerate artificial life, but I'll mention that at the end because they involve my third plausible route to create AGI systems."
},
{
"end_time": 2531.92,
"index": 102,
"start_time": 2506.459,
"text": " which is what I'm actually working on now and I'll just give a few minutes so that I've given a lot of a lot of talks on it before which you can you can you can find online. So in terms of name brand systems that would be AGI systems I'm working on now is called OpenCog Hyperon which is a new version of the OpenCog system. So we had a system called OpenCog"
},
{
"end_time": 2560.879,
"index": 103,
"start_time": 2532.824,
"text": " launched in 2008 based on some older code before that. Now we're making it pretty much from the ground of rewrite of that called Hyperon, but the ideas underlying it could be levered outside that particular name branded system and one way to look at this is that we're hybridizing neuro, symbolic and evolutionary systems."
},
{
"end_time": 2590.111,
"index": 104,
"start_time": 2561.305,
"text": " but not necessarily old fashioned sort of crisp predicate logic. I mean, for those who are into wacky logic systems, it's probabilistic, fuzzy, intuitionistic, paraconsistent logic. So it's a which sort of means probabilistic and fuzzy, probably you know what they mean. Paraconsistent means you can hold two inconsistent thoughts in its head at one time without going going ape shit. Intuitionistic"
},
{
"end_time": 2615.981,
"index": 105,
"start_time": 2590.538,
"text": " Pretty much means it builds up all its concepts from experience and observation. So logic theorem prover, right? So we're trying to do with symbolic stuff by actual logic theorem proving. We're using neural nets for recognizing patterns in large volumes of data and synthesizing patterns from that, which they have obviously shown themselves to be quite good at."
},
{
"end_time": 2645.998,
"index": 106,
"start_time": 2616.544,
"text": " We're using evolutionary systems and genetic programming type systems for creativity because I think mutation and crossover are a good paradigm for generating stuff that leverages what was known before but also goes beyond it. But again it depends on what is the level of representation at which you're doing the mutating and crossing over. So we're integrating neural"
},
{
"end_time": 2664.599,
"index": 107,
"start_time": 2646.442,
"text": " symbolic and evolutionary methods, not by saying, okay, neural is in the box, symbolic is in the box, evolutionary is in the box, and then the boxes are communicating across these channels. What we're doing, we're making this large distributed knowledge metagraph. A metagraph is like a graph"
},
{
"end_time": 2687.329,
"index": 108,
"start_time": 2665.162,
"text": " But you can have links that span more than two nodes, like three, four, five, or a hundred nodes. And you can have graphs pulling the whole sub-graphs. So a hypergraph is a graph which has n-ary as well as binary links. A metagraph goes beyond. You can have links pulling the links or links pulling the general sub-graphs. So we have a distributed knowledge metagraph. There's an in-ram version of the knowledge metagraph also."
},
{
"end_time": 2712.108,
"index": 109,
"start_time": 2688.148,
"text": " We represent neural nets, logic engines, and evolutionary learning inside the same distributed knowledge metagraph. So in a sense, you just have this big graph, parts of it represent static knowledge, part represent active programs. The active parts run by transforming the graph, and the graph represents the intermediate memory of the algorithms also. So you have this big self-modifying, self-rewriting, self-evolving graph."
},
{
"end_time": 2740.538,
"index": 110,
"start_time": 2712.108,
"text": " And the initial state of that graph is that some of it represents neural nets, some of it represents symbolic logic algorithms, some of it represents evolutionary programming, some of it just represents whole bunches of knowledge which could be fed in from databases, they could be fed in by knowledge extraction from large language models or that they could be fed in from pattern recognition on sense perception, right? And to go deeper than this into what we're doing with hyperon"
},
{
"end_time": 2757.432,
"index": 111,
"start_time": 2740.862,
"text": " Involves more math than I could go into here, especially without the presentation or anything. But if you look at it, there's a paper I wrote and posted on ArcCyber a couple years ago called The General Theory of General Intelligence. And what I go into there is how you take"
},
{
"end_time": 2786.732,
"index": 112,
"start_time": 2758.302,
"text": " Neural learning, probabilistic programming, evolutionary learning, logic theorem proving, you represent them all in a common way using a sort of math called Galois connections. So I use Galois connections to boil these AI algorithms all down to fold and unfold operations over metagraphs. So that's probably gibberish to anyone without some visibility into the functional programming theory literature, but I guess the takeaway from that is"
},
{
"end_time": 2813.541,
"index": 113,
"start_time": 2787.466,
"text": " We're trying to use advanced math to represent neural symbolic and evolutionary learning as separate views into common underlying mathematical structures so that they're all kind of different aspects of the same meta algorithm rather than different things living in separate boxes. Now there's a connection between this and"
},
{
"end_time": 2840.009,
"index": 114,
"start_time": 2813.814,
"text": " the artificial life approach to AGI, which I would love to approach at some point. And the connection is, if you were brewing a bunch of artificial life populations on many different machines around the world, wouldn't it be interesting to shortcut evolution and train a smart machine learning system to predict which artificial life populations had promise and kill the ones that didn't early, right?"
},
{
"end_time": 2868.285,
"index": 115,
"start_time": 2840.486,
"text": " You couldn't do that too aggressively or you're going to kill the hopeful monsters, right? But you could certainly identify a lot of things that just aren't promising and identify something early as really promising and make multiple clones of it, right? So the idea of a narrow AI and then eventually AGI like evolution master to help"
},
{
"end_time": 2895.35,
"index": 116,
"start_time": 2869.36,
"text": " brew the artificial life soup seems really really quite interesting to me and maybe could shortcut past the like four billion years of however many billion years life has been evolving on on on earth problem right so i think i mean of course there's also ways more and more advanced ai can help with the neuroscience approach to agi also i mean there's no doubt"
},
{
"end_time": 2923.78,
"index": 117,
"start_time": 2895.981,
"text": " I mean, machine learning is already all over neuroscience, so there's no doubt that steps toward AGI could help with inferring things about how the brain works from available neuroscience data. I still think you may fundamentally need more data than we have now. So those three approaches, I think, are all promising and could work. And finally,"
},
{
"end_time": 2952.91,
"index": 118,
"start_time": 2924.36,
"text": " I want to briefly note the role of hardware in all this, just for a couple minutes, because that's sort of what ended up bringing me here to Florida right now, actually, was the hardware side of things. So if you look at, you know, what caused neural nets to transition the way that they did? I mean, we were all doing neural nets for decades. They were slow, they were conceptually intriguing, but they weren't doing incredibly, incredibly amazing things."
},
{
"end_time": 2974.838,
"index": 119,
"start_time": 2953.49,
"text": " The reason they took off so much is pretty much porn and video games, right? I mean, it's because GPUs became so popular and the GPUs do matrix multiplication really fast and they plug them into regular PCs. They do matrix multiplication across many processors concurrently. But lo and behold,"
},
{
"end_time": 3000.128,
"index": 120,
"start_time": 2975.196,
"text": " Matrix multiplication is also what you need for running many simulations in areas of science, and it's also what you need for running neural nets quickly. So it turned out that these GPU cards, which were created for video games and video rendering, these turned out to be the secret sauce for scaling up"
},
{
"end_time": 3023.217,
"index": 121,
"start_time": 3001.067,
"text": " Neural nets just so they could run faster and I mean in 1990 when I was a professor at University of Nevada Las Vegas we had a 10 million dollar Cray YMP supercomputer. It could do 1,000 things at a time which was so much for them 10 million dollars. Remember we programmed it in sort of parallel Fortran and I mean now"
},
{
"end_time": 3050.043,
"index": 122,
"start_time": 3024.036,
"text": " GPU, of course garden variety GPU can do more than a thousand things at a time and each of those things is done much faster than the Cray did. So we were playing then with neural nets on this supercomputer. We saw what it could do, but now I mean you have multi GPU servers and racks and racks and racks in them, right? So clearly the hardware innovation, it didn't"
},
{
"end_time": 3073.66,
"index": 123,
"start_time": 3050.538,
"text": " Exactly let you take the code we were running in the 80s and 90s and make it work better, but it let you Experiment with that code see what work what didn't work tweak it tweak it tweak it with fast experimentation Find something conceptually fairly similar that does amazing stuff and so one one question is what hardware would let you"
},
{
"end_time": 3083.677,
"index": 124,
"start_time": 3074.548,
"text": " pursue these other three approaches to AI that I outlined way, way better than has been done historically and for brain simulation."
},
{
"end_time": 3109.394,
"index": 125,
"start_time": 3083.968,
"text": " I think it's clear what you need are actual neuromorphic chips, right? I mean, most of what are called neuromorphic chips are not so much, but you can, I mean, you can take Isaac Kevich's chaotic neuron and put it on a chip and there's some research papers on it, though it's not being done at scale. I mean, you could take glia and astrocytes and put analysis about them on the chips. I mean, you could, you could try really hard to make an actual neuromorphic chip to drive large scale brain stimulation on the,"
},
{
"end_time": 3134.667,
"index": 126,
"start_time": 3109.718,
"text": " On the side of hybrid architectures, I'm actually working on a novel AGI board together with Rachel St. Clair, who introduced me up here, who's a postdoc here and who invited me to come speak here. So Rachel had designed this hypervector chip, which puts on hardware very fast manipulations of very high dimensional bit vectors."
},
{
"end_time": 3163.336,
"index": 127,
"start_time": 3135.162,
"text": " which gives faster ways to implement neural nets, but also faster ways to do various things with logic engines. I developed a chip that allows you to do pattern matching on graphs very fast by putting the graph on hardware. So we figured we can put our hypervector chip, my graph pattern matching chip, deep learning GPU and the CPU, put them on the same board, connect them with very modern fast processor to processor interconnect. Now, maybe if you do that,"
},
{
"end_time": 3191.34,
"index": 128,
"start_time": 3164.002,
"text": " you'll have a board that does for this sort of hybrid neuro symbolic evolutionary system, something similar to what GPUs did for neural nets, at least it's a plausible hypothesis. So we're going through the simulation process and looking for manufacturers and so forth. But again, that's both a real project, which I think is cool, done through Rachel's company Simuli, and it's a sort of"
},
{
"end_time": 3219.275,
"index": 129,
"start_time": 3191.749,
"text": " just case in point right like we should see we should see a flourishing of more diverse sorts of hardware that bake diverse sorts of AI processing into the hardware and that that's as important as experimenting on the software because we can I mean we can see historically it's a lot of what led us where we are with with neural nets today. So yeah to briefly wrap up I mean it's a"
},
{
"end_time": 3246.869,
"index": 130,
"start_time": 3219.872,
"text": " Super exciting point in the history of AI. We have systems that do more human-like stuff than ever before. I think they're not AGI's and cognitive science thinking is very useful for understanding the ways in which they're not intelligent like humans are. On the other hand, I think many of the same underlying technologies are going to be useful for"
},
{
"end_time": 3277.176,
"index": 131,
"start_time": 3247.432,
"text": " for building actual AGI. So while I don't think the chat GBT type systems are on the direct path, I mean, I think they're indirect evidence that we are probably not that far off from AGI. So I think I agree with Sam Altman. We could be at human level AGI in five years or something from now. I also won't be shocked if it's 15 years. I'll be shocked if it's 50 years. And what's cool is"
},
{
"end_time": 3306.766,
"index": 132,
"start_time": 3279.172,
"text": " How uncontroversial these statements are in so many circles are now, right? It's cool and it's scary, but certainly an exciting time to be doing this sort of research. So if you want to find out more about all this, there's my website, gertzel.org has links to a lot of things I'm doing. The website of my company singularitynet.io"
},
{
"end_time": 3328.524,
"index": 133,
"start_time": 3307.312,
"text": " As well as telling about our blockchain-based platform for running AI decentralized across a global network with no central controller, which I think is critical to the ethical rollout of AGI, but I didn't even have time to get into today. And now we all have to go to the beach and have a barbecue."
},
{
"end_time": 3363.148,
"index": 134,
"start_time": 3334.343,
"text": " That was fascinating. Thank you so much, Ben. All right, so questions. So yes. Sometimes we put AGI as a high bar of what we're trying to achieve, but it's probably going to be pretty uneven. So in what ways will it exceed human intelligence? What is the likely scenarios? And will those areas be identifiable by humans? Well, I think that within"
},
{
"end_time": 3392.841,
"index": 135,
"start_time": 3364.258,
"text": " Let's say a couple years just to throw a concrete number out there. I think within a couple years of getting a true human level AGI, we will have massively superhuman AGI that will exceed humans in essentially every respective intelligence. So I mean I think once we have an AGI that can do computer science and math and computer programming, that can do the stuff the people in this room can do,"
},
{
"end_time": 3420.742,
"index": 136,
"start_time": 3393.592,
"text": " I see no reason it couldn't upgrade its code base and improve the algorithms underlying itself to make itself, say, 1.2 times as smart as it was initially. And then you lather, rinse, repeat. And this gentleman here wrote a paper on this some years ago. So in which ways the first AGI will exceed people is not obvious and could depend on what route you take, right?"
},
{
"end_time": 3444.445,
"index": 137,
"start_time": 3421.305,
"text": " If it came out of an approach with a symbolic logic engine in it, it's going to be way better at reasoning than people are. If it came out of a brain simulation, then it might not be better at reasoning than people are. But you can still feed more sensors into it than you can feed into a single human brain, so we'd get some added understanding that way. But no matter how you get there,"
},
{
"end_time": 3474.258,
"index": 138,
"start_time": 3444.718,
"text": " I think there's a recursive improvement loop you'll enter into, particularly when you consider you can make a large number of copies of this system, right? Like you have one smart human, okay, well then within reasonable amount of cost you have a hundred, maybe a thousand smart humans, but they can do direct brain-to-brain sharing of knowledge, right? So it's pretty easy to see how you get that recursive self-improvement. I mean you can't rule out there being some limit, but it seems"
},
{
"end_time": 3491.374,
"index": 139,
"start_time": 3475.145,
"text": " really outlandish that like there's a fundamental limit and only 1.5 times human intelligence to me that's like saying you'll never make something run more than one and a half times as fast as a cheetah or something that doesn't feel right."
},
{
"end_time": 3517.551,
"index": 140,
"start_time": 3491.988,
"text": " I finally get to ask one question. I'm just, I'm curious because, you know, we think that AIs do have this recursive self-improvement capability, but when we're thinking about AI as a distributed environment with an ecosystem of different AI services and large language models and, you know, all kinds of entities controlled by who knows what, right? Certainly not aligned organizations. Why think that"
},
{
"end_time": 3546.22,
"index": 141,
"start_time": 3518.541,
"text": " The future brings this, you know, improvement in the intelligence level. Why not think of the future more in terms of what's been happening in bad scenarios with the amplification of discontent on Facebook, for example? Well, I think the recursive self-improvement at one level, it happened, I mean, in one sense, it happens on a different level than that, right? So when you can think about a large knowledge metagraph,"
},
{
"end_time": 3566.732,
"index": 142,
"start_time": 3546.732,
"text": " Like we have in OpenCog Hyperon and we have our own programming language which is called Meta, M-E-T-T-A, Meta Type Talk is the acronym. So we have our Meta language which basically interprets directly into nodes and links. And actually to model the semantics of that"
},
{
"end_time": 3582.073,
"index": 143,
"start_time": 3567.09,
"text": " We use the mathematics of the infinity groupoid from category three, which is equivalent to Wolfram's Ruliat that he was talking about. So I thought that was that was interesting that the same he uses this Ruliat structure built of all these hypergraphs."
},
{
"end_time": 3607.705,
"index": 144,
"start_time": 3582.739,
"text": " The Ruliat is basically the infinity groupoid from category theory, although Ruliat is a wizzier name, and we use metagraphs which are like hypergraphs with a few extra features. So actually the self-rewriting, self-organizing data structure we're using in Hyperon is highly similar to this self-rewriting data structure he's using to model fundamental physics, although the"
},
{
"end_time": 3636.681,
"index": 145,
"start_time": 3608.422,
"text": " The statistics of the networks you see for modeling particles and objects are different than the statistics of networks you see if you're trying to model common sense knowledge, but there's not a contradiction. Those could be structures on different levels of the same network. So I mean at that level, the ability to self modify and self organize would like occur within the distributed network mind of a single open cog system or something. Now if you're talking about"
},
{
"end_time": 3664.309,
"index": 146,
"start_time": 3638.251,
"text": " across the whole planet, I mean, then you're basically, you're looking at two different scenarios before or after the AGI takes over the world, right? I mean, before the AGI takes over the world, you probably have a highly splintered off scenario, like right now where China is building its own networks, the US is building its own networks."
},
{
"end_time": 3691.357,
"index": 147,
"start_time": 3664.838,
"text": " Rasha was trying to before they got distracted murdering people. On the other hand, what we're trying to do with SingularityNet is make it open and decentralized infrastructure for deploying AI. You think of things like the Internet or Linux, which are everywhere with no central controller. If the first AGI is rolled out like that,"
},
{
"end_time": 3717.773,
"index": 148,
"start_time": 3691.869,
"text": " It becomes like BitTorrent or something without the illegal copyright aspect. It becomes like it's all over the place. It's running on machines all over the place with different nodes and no one country has a monopoly of it. No one can stop it. But again, in the transitional phase, while we make the transition from narrow AI to AGI and then from the first inklings of AGI to"
},
{
"end_time": 3743.626,
"index": 149,
"start_time": 3718.541,
"text": " full-on super AGI, what happens to human society during that transitional phase is a very nasty and difficult question. What happens when 90% of human jobs are obsolete, but the super AGI hasn't yet created a molecular nano-assembler to airdrop into everyone's farm? Then the developed countries will give universal basic income to everyone."
},
{
"end_time": 3768.985,
"index": 150,
"start_time": 3744.343,
"text": " and Africa will remain subsistence farmers with no work outsourced to them, then their kids who can computer hack will hack into the power grid in the West and wreak a lot of havoc. I think there can be quite difficult scenarios in the interim, yet I'm an optimist on the whole in that I think once you have an AGI that's several times human level intelligence,"
},
{
"end_time": 3790.725,
"index": 151,
"start_time": 3769.633,
"text": " Then it can just cut through all this. Then it's much smarter than we are. It can build its own robot factories to build new robot factories to create smarter AGI's. And paperclip factories maybe, right? Well, humans become like the squirrels in the national park, right?"
},
{
"end_time": 3818.814,
"index": 152,
"start_time": 3791.442,
"text": " I mean, they carry out their own love lives, they hunt, they fight, they build stuff, and the rangers don't try to interfere with their social lives, right? It's going to be fun talking to you more, Ben. I think we're on the same wavelength. Okay, so there were some earlier questions, starting with Garrett, and then Carla. So actually, I have a question, because you did bring it up a little bit, and I could talk a little bit about this, but I just want to ask about this, because you brought up A-Life,"
},
{
"end_time": 3846.357,
"index": 153,
"start_time": 3819.565,
"text": " I don't think we fundamentally need different hardware to get to human level AGI. I mean, unless"
},
{
"end_time": 3876.8,
"index": 154,
"start_time": 3847.244,
"text": " Unless we're all wrong that classical computing is good enough and you really need quantum computing, which I don't see evidence for, but I can't give it a zero probability. But I mean, by and large, from what we see in the brain and what we see with AI systems out there now, you don't need radically different hardware. But by the same token, you don't need GPUs to do neural nets either, right? You could do it all on CPUs. It just costs more. The thing is, a couple orders of magnitude extra cost and extra power consumption"
},
{
"end_time": 3882.295,
"index": 155,
"start_time": 3877.824,
"text": " Is this sort of practical obstacle that can delay something by decades?"
},
{
"end_time": 3908.763,
"index": 156,
"start_time": 3882.961,
"text": " But, I mean, in the scope of history, delaying by decades doesn't matter either, right? That's kind of gets to one of my questions to go to, right, which is there's a difference between achieving the goal of AGI with the hardware, despite the outrageous energy cost of mining the core of the planet, to realize this kind of, you know, whatever magnitude order of intelligence greater system."
},
{
"end_time": 3935.845,
"index": 157,
"start_time": 3908.763,
"text": " It seems like with the hardware Rachel and I are working on, you can speed up the operations of systems like OpenCog or biologically realistic neural nets by at least like a couple orders of magnitude. So if you can speed things up by between a hundred and a thousand times, very helpful. So if you think about say a GPG-3 model cost"
},
{
"end_time": 3963.865,
"index": 158,
"start_time": 3936.203,
"text": " $5 million, $10 million to train. Well, if you didn't have GPUs, let's just for sake of argument, say it took 100 times longer, right? I mean, so then instead of 10 million, it's a billion dollars. But I mean, there's companies, these companies have a billion dollars, right? So I mean, and, and now, now open their eyes getting $29 billion, right? But the thing is, no one wanted to give them $29 billion before they spent the 10 million, right? So it just, it's the higher cost,"
},
{
"end_time": 3981.8,
"index": 159,
"start_time": 3964.343,
"text": " We'll slow things down and making chips that can speed things up by a hundred or five hundred times. I mean, obviously we'll save time off it, but I don't think it's a really fundamental necessity."
},
{
"end_time": 3994.957,
"index": 160,
"start_time": 3982.108,
"text": " If quantum computing were needed, that would be more like a fundamental necessity. I mean, you could of course simulate the Schrodinger equation on a classical computer, but then you're getting into like many, many, many orders of magnitude slowdown that becomes infeasible."
},
{
"end_time": 4018.012,
"index": 161,
"start_time": 3997.193,
"text": " Yeah, thank you. So there are three alternatives towards AGI. One of them is one in which artificial general intelligence emerges from simulating the brain in a kind of individualistic, isolated manner. And the other ones are more like"
},
{
"end_time": 4037.824,
"index": 162,
"start_time": 4018.012,
"text": " My question is the role of social intelligence in the development of our general intelligence."
},
{
"end_time": 4054.036,
"index": 163,
"start_time": 4038.251,
"text": " independent of the"
},
{
"end_time": 4082.807,
"index": 164,
"start_time": 4054.565,
"text": " tribe and put them in robots or in virtual characters in a game world and let them let them buzz around and do things and certainly with with open cog systems I didn't go into this but we're we're looking at exactly that we're looking at using open cog hyper on systems to control humanoid agents in a 3d virtual world and I want to get them to collaborate to build stuff and what one experiment I'm very interested in is seeing if you can get a collection of open cog agents"
},
{
"end_time": 4112.312,
"index": 165,
"start_time": 4083.217,
"text": " to invent their own language to communicate with each other about about about building stuff right so I mean there's you could do social intelligence things really in in any of these paradigms I think a difference is in the sort of neural symbolic evolutionary approach there's more ways to cheat by injecting knowledge into the system's brain like you can inject databases and we we're working on ways to take"
},
{
"end_time": 4134.957,
"index": 166,
"start_time": 4112.807,
"text": " All the knowledge from a large language model, like a GPT-4, turn it all into huge predicate logic statements. They can then be fed into a logic engine. If I had a trillion predicate logic statements, which are useful knowledge, I don't know how to feed them into a brain simulation or an A-life system. I do know how to feed them into an open cog system."
},
{
"end_time": 4149.155,
"index": 167,
"start_time": 4135.623,
"text": " That's a difference. How easily can you cheat and inject knowledge? But social intelligence, I think you can do in any of these paradigms and may be a critical thing to do. I mean, we want to experiment with it."
},
{
"end_time": 4171.169,
"index": 168,
"start_time": 4149.48,
"text": " I think Misha had a question. Yes, I just have a quick question. You just mentioned the having different agents and I wanted to hear more about the importance maybe to you whether you consider that important or not of agency and agent in environment in the context of intelligence. I think it's important to human-like intelligence"
},
{
"end_time": 4197.329,
"index": 169,
"start_time": 4171.732,
"text": " that we evolved to control these bodies which are solid objects that move around in an environment comprised largely of other solid objects and that we're sharing this environment with other similar looking objects that seem to do kind of the same thing things as us and that we we need to do things collaboratively with them. I mean clearly if you want to ask like what's the"
},
{
"end_time": 4225.896,
"index": 170,
"start_time": 4197.5,
"text": " prior distribution over observations and actions, characterizing human intelligence, a lot of it has to do with embodied communication, like shared communication with others with similar bodies in a similar world. Whether that's fundamental to AGI in general, perhaps not, but it's pretty fundamental to human-like general intelligence. Parker had a question."
},
{
"end_time": 4256.954,
"index": 171,
"start_time": 4228.148,
"text": " This is awesome. Thanks. And I'm sure you've talked about this in your other talks. Can you speak to the ratio between the neural net, symbolic and evolutionary? Is one more important than the others? Is it a split or is that proprietary that you can't talk about? I mean, I think in the end, any of these three paradigms could build a very powerful AGI with enough resources."
},
{
"end_time": 4285.555,
"index": 172,
"start_time": 4257.398,
"text": " On the other hand, with 50 lines of Lisp code, you can implement what Marcus Hutter called AIXITL, which could have arbitrarily high intelligence using enough resources. So that's not that strong a statement. I think, yeah, they're each important for different things. And I think you can build a variety of different systems that weights each one more highly if you wanted to, actually. So you could make a system that was mostly a transformer neural net,"
},
{
"end_time": 4312.363,
"index": 173,
"start_time": 4286.084,
"text": " with just adding little bits of symbolic reasoning on to stop the Transformer neural net from being too inconsistent with itself or you can make a system that's mostly a symbolic reasoning engine and just outsources low-level pattern recognition and synthesis to a neural net and these would just be different flavors of minds that you know they might differ more than"
},
{
"end_time": 4342.619,
"index": 174,
"start_time": 4313.592,
"text": " autistic person from an average person, but there'd be different variations of the same mind architecture. As I'm thinking of it by default, what I'm doing now, this symbolic probabilistic reasoning engine is sort of the core that everything feeds into and we're extracting knowledge from large language models and then we're using neural nets for perception and action. We're using evolutionary learning"
},
{
"end_time": 4361.886,
"index": 175,
"start_time": 4343.353,
"text": " to come up with new ideas that are then validated or rejected by the by the reasoning engine. But on the other hand, the software plumbing could sort of be used to make a variety of different systems that weight different components more highly so that I think that will in large measure be"
},
{
"end_time": 4382.125,
"index": 176,
"start_time": 4362.739,
"text": " an evolutionary, an experimental question, right? But it's also, it's a bit like asking how important is the attention mechanism inside the transformer, right? Because, I mean, it's there. A lot of other things are there too, and they all have to work together. So, Elon had a question."
},
{
"end_time": 4404.462,
"index": 177,
"start_time": 4383.251,
"text": " Do we have a definition of AGI, right? What do we mean by AGI? Do you have a working definition? It sounds like you have a sort of complementary, which many people do. You know it when you don't see it, right? In some sense you haven't seen that thing, that creative spark, for example. I mean, I think I talked about this in the beginning of the talk. I think there are definitions of"
},
{
"end_time": 4425.538,
"index": 178,
"start_time": 4405.555,
"text": " AGI in a broad sense, and Marcus Hutter got the ball rolling on that in his book Universal AI. I mean, Shane Legge's PhD thesis, Shane Legge went on to found Google DeepMind, his PhD thesis gives an algorithmic information theory based definition of general intelligence."
},
{
"end_time": 4455.947,
"index": 179,
"start_time": 4426.015,
"text": " So that's there. So then what we mean by a working definition, I guess, is a definition of human-level AGI. And there isn't a really crisp definition there. And I think that makes sense because it's, I mean, biology and psychology are not so crisp and elegant, right? Like human-level intelligence is just whatever humans happen to evolve about"
},
{
"end_time": 4476.852,
"index": 180,
"start_time": 4456.698,
"text": " It's just whatever we happen to evolve to do by now. I think one thing that chat GPT highlights is that being able to do like 80% of what 80% of people do, like most of the time, still"
},
{
"end_time": 4504.599,
"index": 181,
"start_time": 4477.654,
"text": " is radically not the same thing as fundamental human level, human level general intelligence. And this is why I said, well, what about the five year turing test, right? Like if I, if I, which kind of ties in with the robot college test, right? So I think, I think what that what that's leaning toward is you would want a system to be able to, over a multi year period,"
},
{
"end_time": 4531.92,
"index": 182,
"start_time": 4505.708,
"text": " grow its knowledge from starting point to new knowledge at the end of that multi-year period, at least as well as a human can. And that's different than just having the specific capabilities of a human at any one point in time. So that's something I would definitely look at in practice. If we had a system with a certain level of knowledge and intelligence,"
},
{
"end_time": 4560.794,
"index": 183,
"start_time": 4532.295,
"text": " And we didn't upgrade the software, right, but we interacted with and taught the system things, led experiment with things, and it gained vast amount of new capabilities through its own experimentation with the world, including gaining new domains of knowledge, right, like a person does. That would certainly impress me, and that's something that every little kid does, and certainly no"
},
{
"end_time": 4587.517,
"index": 184,
"start_time": 4561.715,
"text": " Fine-tuning, I mean that's exactly what it's doing when you give it new information It learns that new information and corporates it's able to do It is not at all gaining fundamentally new skills and won't represent in its training database. No You can learn to go logic system. I can teach a new predicates new functions. It depends what you mean by a fundamentally different skill Yeah, I mean if I mean if there was no for example a young child a"
},
{
"end_time": 4615.486,
"index": 185,
"start_time": 4588.2,
"text": " who has never played any musical instrument. You can teach to play a musical instrument, which is different than someone who has mastered ten musical instruments, learns to master the eleventh musical instrument. There's certainly a difference there. It can learn a new language, but it knows a lot of languages already, so that's not"
},
{
"end_time": 4635.282,
"index": 186,
"start_time": 4616.408,
"text": " I think now what we need to do is thank Ben very much and thank all of the speakers for today. That was a really interesting day."
},
{
"end_time": 4664.36,
"index": 187,
"start_time": 4635.469,
"text": " All right. Well, after immersing yourself in this encouraging and somber lecture from the Center for the Future Mind, you may be eager to explore more of the astounding and troublesome developments in artificial intelligence. To satiate your curiosity, I invite you to browse through the accompanying playlist, which not only offers a deeper insight into the implications of these breakthroughs, but also sheds light on measured aims at regulating their continued growth. Subscribing allows you to be privy to the upcoming panel discussions exploring non-human consciousness in babies and animals,"
},
{
"end_time": 4688.507,
"index": 188,
"start_time": 4664.36,
"text": " Slated to debut approximately one week from now. Take care New announcement the patreon as well as theories of everything org the membership gives you access to personally curated detailed summaries of specific episodes the most recent Steven Wolfram lecture on chat GPT as well as this Ben Gortzel episode replete with references to each book mentioned theorems when they come up at play by play bullet points of conclusions and statements by the guests"
},
{
"end_time": 4709.206,
"index": 189,
"start_time": 4688.507,
"text": " Only select episodes will have this feature, so you'll be able to vote on which episode you want most. Because this takes a considerable amount of time, this is my minor way of saying thank you for supporting the Toe Podcast by giving you something edifying to read along or review afterward. Again, that's by signing up at patreon.com slash Kurtjmungle. Or if you don't like that website, then there's theoriesofeverything.org."
}
]
}
No transcript available.