Elan Barenholtz, William Hahn, Curt Jaimungal (Me): How AI Healthcare Will Change the World
April 7, 2025
1:26:53
Transcript
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region.
I'm particularly liking their new Insider feature, which just launched this month. It gives me front-row access to The Economist's internal editorial debates, where senior editors argue through the news with world leaders and policy makers, plus twice-weekly long-format shows, basically an extremely high-quality podcast. Whether it's scientific innovation or shifting global politics, The Economist provides comprehensive coverage beyond the headlines.
As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount.
If you imagine a VIP in the government and something happens to them, literally hundreds of doctors are gonna get put on call. None of us can afford that. But if we look at this technology, it'll be very reasonable to think that you have the brain power of a thousand physicians. What is this gonna look like when you have the whole ChatGPT to yourself? It's only a matter of time before you can afford to have a thousand agents all working for you.
What if every person on Earth could have access to a personal team of medical specialists, not just one doctor, but hundreds working in concert available 24-7 and at virtually no cost? In this episode, we explore how AI is revolutionizing healthcare through what experts call medical swarms, so armies of specialized agents that could make concierge medicine available to everyone.
Joining us is a distinguished panel of experts: AI professor William Hahn, founder of FAU's Machine Perception and Cognitive Robotics Lab and a returning guest of this channel. Here he is discussing Wolfram and consciousness, link in the description. There's also Dr. Gil Blander, an MIT aging-research veteran and founder of InsideTracker. There's also Dr. Dan Elton,
an NIH scientist developing AI for medical imaging, and AI professor Elan Barenholtz, co-founder of Florida Atlantic University's Machine Perception and Cognitive Robotics Lab. My name is Curt Jaimungal, and I'm honored to have been invited to moderate today's Polymath Medical Salon at FAU's Gruber AI Sandbox.
Keynoted by pioneering biologist Michael Levin; here's his presentation from Polymath, link in the description. Curated by Adi Shah of Ecolopto, who held the first Longevity Hackathon at MIT with Augmentation Lab, featuring TOE guests like Stephen Wolfram, Joscha Bach, and Manolis Kellis. And this was also curated by academic and medical philanthropist Ruben Gruber.
Good evening, everybody.
We're really excited that you guys are here for this kickoff event. We're here in the Machine Perception and Cognitive Robotics Laboratory, something I started with Elan here about 10 years ago. And then we expanded into this beautiful space we call the AI Sandbox. So first and foremost, he's not going to like it, but we have to thank Ruben Gruber back here. Give it up for Ruben. He made this all possible.
It's not just for building us this beautiful space that enables events like this, but in this particular event for serving as a catalyst for what we think is a very exciting space that's combining the improvements we've seen with artificial intelligence with some of the oldest goals in the world, namely medicine and longevity.
And we believe that this is going to be one of the killer or rather life saving applications for artificial intelligence. So I'm very excited here to share. We're going to have a nice dialogue with the panel, but we want your input. The whole point is to create a conversation factory and to have a loop that's connecting the practitioners, the physicians, the people who need care, the people who give care, and all of the amazing young people here that are going to go out and use these tools
to make those things happen. Give it up for Dr. Hahn. Thank you. You may have seen me running around practically like a chicken without a head for basically the whole event. My name is Adi Shah. I run this thing called Ecolopto, which is Greek for "incubate," if I got Google Translate correct. I basically run a research institute, and this research institute is kind of a
somewhat sloppy excuse for me to be able to better understand why we're here on this planet. What is purpose? Why are we here? What are the unknown unknowns of biology, math, physics, right? All these types of things. But if I had to sum up Ecolopto in one word, beyond just doing hackathons, research hackathons, and salons like these, and, you know, all sorts of things, it's that I want to be able to
Hello, my name is Gil Blander.
I came up with the idea of the company that I wanted to start. The company name is InsideTracker, and what we're trying to do is help humans live better, longer, based on what's happening inside the body.
So we are looking at a lot of variables. We're looking at around 50 blood biomarkers, DNA, data from wearables; we even added food recognition, so we are basically extracting what you eat and what you supplement. We combine all of it together and, as I discussed with Will, try to build, in a way, a digital twin of you, and based on that we provide the best intervention for you, like a laser-focused intervention.
Hi everyone, Dan Elton here. Like many of the people in my class, I got into machine learning and AI and eventually I got into doing research on AI for medical imaging at NIH. So I'll just jump into some of the applications that I've been working on and that I'm excited about. One of them is
Basically, this idea of unlocking additional information in medical images. For each of these different things, you can get very precise measurements, which can be used for risk prediction. So I've done some work on cardiovascular disease risk prediction using these biomarker measurements. The application I want to talk to you about today is this idea of using AI to help treat patients with chronic
I actually had long COVID for over a year. We were just talking about that with somebody. And when I had long COVID, I started reading a lot about chronic fatigue syndrome. It actually really blew my mind that actually about 2% of people worldwide have chronic fatigue syndrome, which is extremely debilitating and really reduces quality of life. But the thing is that the medical system is very poorly equipped to handle these sort of
persistent complex conditions. What we could do is before a patient goes to see a doctor, they could actually talk to an AI first. Basically, the point is that doctors really suck at handling these conditions. They only look at the biology, they're not equipped to handle the psychology and the sociology, and they're really overworked. They're only going to spend 40 minutes tops
And that's not nearly enough time to understand a complex condition. So I think AI can really help fill those gaps. It could be AI deployed in the hospital, or it could be something you just access from home. Hey everybody, Elan Barenholtz. I co-direct the MPCR lab with William. So I'll state my position first, which is that I think that large language models, the underlying math of
What we have now in these models is not just a really cool piece of technology that we can do important things with. It is certainly that and we are going to do very important things with it. I think we should be very optimistic and hopeful about the extraordinary science that's going to come from
That's not what I'm personally most excited about. What I'm personally most excited about is that we have a brain in the jar. I don't know if I have the floor right now to sit and try to convince you, but my theory is that we've modeled something very fundamental about cognition itself, certainly language. And we've captured something so essential that we can now successfully model a human brain, not all of it, not the sensory pieces,
Not a lot of sort of the subcortical kind of activity, but the thinking cognitive brain is something we can now model directly in a system that's not us, that we can manipulate arbitrarily. We can do experiments with it that presumably, and again this is my conjecture, but will have implications for ourselves. And so one of the things, it's in my poster in any case, I believe that there's
core insights into the nature of memory that are distinct from the way neuroscience has thought about it since its inception. And the insight comes from these models: these models seem to actually operate in a way that is probably synonymous with the way our brain functions, in a certain sense. I'm sorry if that's a little abstract, but what we can do now, that we could never have done as a species before, is
Say we're going to take some analog to the human brain that can learn the kinds of things that we can do. We can test it the way we test ourselves and we can start to experiment with it. William and I were talking last night about a curriculum, right? You can test. There's a big debate. Math. Is math important? How many people think, raise your hands, how many people think that it's really important that we continue to teach children basic mathematics, arithmetic, long division,
how to deal with fractions, factoring, all of that? Raise your hands. Okay. How many people think maybe that's not important? Okay. I'm not here to debate this; we can debate that later. Here's my proposal: we can test that. We can take a fresh, brand-new LLM, train it on the entire curriculum that you would receive without the math, and then test it on other stuff, right? Because
We're not presumably you didn't raise your hands because, you know, when you're calculating the tip, you still need to be able to do a little bit of decimals. That's not what you meant. You meant there are broader implications for mathematical training that are going to spill over to other kinds of. Well, now we can test that. And the same for music, foreign language. Right. Now for something less controversial.
is this guy right here, who came all the way from Canada, which may or may not become part of the U.S. at some point. But right now it's its own thing. And why don't you tell people what you do and why you're here. Hi, I'm Curt Jaimungal and I have this channel called Theories of Everything on YouTube. You can search it. It's a place where I interview professors and researchers on the latest theories. So for instance, Michael Levin on limb regeneration and his anthrobots and
Xenobots and Stephen Wolfram on his physics project. It's essentially what are the largest questions that there are in reality about nature, about meaning, about life, and then trying to answer them rigorously and speak to people who have theories about them, especially those that are new, but rigorous and technical. So I want to know what is the background of the audience here? Who here is in neuroscience? Raise your hand.
And who here is in math? All right. What about physics? All right. Well, I'm going to ask a set of questions to these people, and then I'm going to open the floor to you all, because the quality of the questions to Levin was so high. Yeah. So what can we learn, Elan, from LLMs about brain disease? So funny you should ask; that's what I was about to talk about. And you're not allowed to use the word autoregressive.
Okay, so let me, can I first define autoregressive and then refer to it? No, I'm kidding. Anyway, so as I said, I think that there's the same math, the same computation that's happening. We won't name it, but that's happening in large language models. Basically what they're doing is they're taking in an input, right? It's a sequence. You ask them a question and then they just guess the very next word that they're supposed to say based on that question.
And then they take that word and they feed it back into the input sequence. So you have the question plus that word, and then they say, oh, what's the next one? And so this is fundamentally what they're doing. They're just guessing this next token. And my theory is that that is at least what human language is. I have lots of reasons that I believe that, but I don't need to elaborate on them right now. But if that's the case, then when somebody is generating language, what they have to do is retain
So what you're saying is you're an LLM?
I think it probably extends beyond linguistics. However, linguistics is at least important enough to say: can we model something that happens linguistically in people? And so what happens in dementia? Typically, early on in dementia, you will see short-term memory loss. People won't be able to repeat back a sequence that you say to them in perfect order.
That's often referred to as a kind of memory loss. I understand it differently. I understand it as: there has to be some representation, some activation. In order to guess that next token, you have to have some memory of what you've said before. But it's not meant for retrieval; it's meant for next-token generation. So this model
makes a completely different interpretation of what dementia is. Instead of thinking of dementia as missing information, you can't retrieve it anymore, it's that your brain is no longer retaining the activity that's necessary to generate the next token. What I'm doing experimentally is building LLMs that have a shorter context window. So instead of, you know, when you talk to ChatGPT or Claude, you can feed it a document,
everything that it produces during the course of your conversation goes back into its input. It churns through it on every single cycle; every single word it outputs goes through the whole thing. In humans, we probably don't do that. If I ask you what I said three sentences ago, can anybody repeat any of my sentences? Probably not. The information isn't actually there; it isn't sufficient to actually repeat it. However, you'll notice
you are able to follow what I'm saying. I'm talking about this idea of a context window, how big it is, manipulating it in the case of LLMs as a model. Right, you've got all of that. So what we've got here is something like an LLM, but it has a very different feature to it. It has this kind of cache of activity, of activation; it's not the entire prompt going in every time. So this model of dementia is just to squeeze that window
and to say: okay, LLM, you're not going to actually have the entire conversation we've had until now. You're going to have some much-reduced context that will allow you to do that next-token generation. And I'm doing this by just a simple mathematical manipulation. You just say: here's what you've got to work with. But if you talk to these things, they get confused in just the way a dementia patient would.
And they try to make excuses for it; they'll kind of confabulate during conversations with you. So what that tells me is: oh, I think this might actually be a proper model that's going to explain what's going on in dementia, such that we can now think about, well, what are the interventions one can do?
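The manipulation Barenholtz describes, handing the model only a reduced slice of the conversation before it generates the next token, is mathematically simple. A minimal sketch, where the window size and token list are purely illustrative, not values from the talk:

```python
def truncated_context(tokens, window):
    # Keep only the most recent `window` tokens -- the "squeezed" context
    # standing in for reduced retention in the dementia model.
    return tokens[-window:] if window > 0 else []

# Stand-in for a 100-token conversation history.
conversation = [f"tok{i}" for i in range(100)]

# The model would now condition its next-token guess on only the last 8
# tokens instead of the whole conversation.
short_memory = truncated_context(conversation, 8)
print(short_memory)
```

In a real experiment this slicing would happen at every generation step, so the model always sees the same fixed-size tail of the dialogue.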
What does this completely new interpretation of what memory is tell us about how to potentially improve memory? So I'm sure you've heard of the diffusion LLMs. Oh, that's so interesting. You should mention that. So at first I was dismayed when I heard about these.
At this point, you can talk about what autoregression is and what the implications of diffusion LLMs are for your theory. As I mentioned, what LLMs are doing is this: they're trained models, they're a network. A network takes in an input and produces an output. The input and output they are trained on is: here's a sequence of words, and all they're trained to do is output the next word. It's called a token, maybe not exactly a word, but we can just say next word.
That's all they're doing. And then they take that word, stick it back onto the sequence, and feed it through again. Same model, right? It just has a new input. And with the new input, with the new word, it produces yet another output. And it just does this sequentially. That's autoregression. Autoregression just means that you're doing this sequential input-output generation, where the output becomes part of the sequence in some way. Okay.
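The loop being described can be sketched in a few lines. `toy_model` below is a hypothetical stand-in for the trained network; it illustrates only the feed-the-output-back-in mechanics, not real language modeling:

```python
def generate(model, prompt_tokens, max_new_tokens):
    # Autoregression: the model maps the current sequence to one next
    # token, and that token is appended to the input for the next step.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model(tokens)  # one forward pass: sequence -> next token
        tokens.append(next_token)   # the output becomes part of the sequence
    return tokens

# Toy "model": predicts the sum of the last two tokens -- a stand-in for
# a trained network, chosen so the loop's behavior is easy to verify.
toy_model = lambda seq: seq[-1] + seq[-2]
print(generate(toy_model, [1, 1], 4))  # [1, 1, 2, 3, 5, 8]
```

The same structure holds for an actual LLM; only `model` changes from a two-token sum to a neural network's next-token prediction.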
I feel like I'm interrupting a very interesting explanation of autoregression and diffusion. I do want to answer that. We'll come back to it. Thank you, Adi, for interrupting a lot for me.
So I thought this idea that you have of essentially simulating, you're talking about the mind, maybe you're alluding to consciousness, but really what I think you're talking about is personality. Because when we think about dementia and we think about it through the model of a linguistic lens, you don't have access to what is happening in the brain, what is happening in the rest of one's physiology when you're thinking about it just linguistically, right? So you are able to test interventions
on
What do you think about the interpretation?
As you know, on Theories of Everything, we delve into some of the most reality-spiraling concepts from theoretical physics and consciousness to AI and emerging technologies. To stay informed, in an ever-evolving landscape, I see The Economist as a wellspring of insightful analysis and in-depth reporting on the various topics we explore here and beyond.
The Economist's commitment to rigorous journalism means you get a clear picture of the world's most significant developments, whether it's in scientific innovation or the shifting tectonic plates of global politics. The Economist provides comprehensive coverage that goes beyond the headlines. What sets the Economist apart is their ability to make complex issues accessible and engaging, much like we strive to do in this podcast.
If you're passionate about expanding your knowledge and gaining a deeper understanding of the forces that shape our world, then I highly recommend subscribing to The Economist. It's an investment into intellectual growth, one that you won't regret. As a listener of Toe, you get a special 20% off discount. Now you can enjoy The Economist and all it has to offer for less.
I think we only have evidence so far for language as being this kind of autoregressive. My grander theory, which I don't have enough of a leg to stand on yet, is that it's all that. And so personality is also just an autogenerative process.
There are many of them. There are visual ones, there are multi-sensory ones. They do different kinds of work. I don't know about olfaction; I don't think we can think in smell. I have something to say about that. Anything you can think in is sort of autoregressive.
actually really
Misha, go ask. I actually have a really interesting thing on the thinking and smell. If you look at what language is capable of and making an assumption, which is probably a bad assumption, that thought is linguistic in nature, then you'd expect that a corpus of text that's able to train an LLM would be able to handle those other senses in the same way that it can, say, describe something that it hasn't seen before visually and explain how to paint it. And it can explain how something sounds.
It indicates that people are very visual and auditory and language was created by people. If it was the dogs,
I'm not convinced that the LLMs can't think in terms of flavors and things like that. Just this weekend I was showing my brother what you can do with ChatGPT and coincidentally he had sent me a picture. He just went to a new market and he had gotten a whole bunch of different ingredients and he said, oh, I should ask ChatGPT. I said, just take a picture.
And he literally took a picture of the ingredients and asked what to make. And it gave him a very elaborate recipe. And at the end, he said it was 10 out of 10. He thought it was, he thought it was really fantastic. And it was a random set of ingredients and he was able to compile them. So I think if we fed them enough cookbooks, you know, there is, and again, it's, are they just kind of vacuuming up the human experience? Does it really ever understand what these cocktails are going to taste like? In the case of vision, you can see, right? You can feed them YouTube videos and they can think,
Okay, I want to ask Gil a question. So, Gil, you left academia for industry. Now, many people who stay in academia do so because there's research in academia and they see industry as that's, well, that's where you make money, but it's not where you perform research. But it's my understanding that you did both.
Can you please talk about that? Why is it that you left the academy? What is it that the universities do well and where do you see them lacking? That's a tough question. I left the academia because I felt like I wanted to have a big impact. And again, no complaints.
Nothing bad to say about anyone in academia. But what I saw at that time, it was long ago, is that basically you publish a paper that maybe five or 10 or 20 or 50 people are reading, and that's it. The impact is pretty small. And I wanted to translate more, to everyone. Actually, that's what I'm trying to do. My mission is to translate information for everyone.
And that's why I decided to move to the industry. And I think that doing research in the industry is not less exciting or less important than in the academia. For example, we have now a data set of hundreds of thousands of people with blood, DNA, fitness, stroke and
and some food recognition and some biological age, and a lot of the things that we discussed before. The All of Us program, for whoever doesn't know about it, is something similar that the US government is trying to do, at least one million people, but it takes time, and it takes money. Let me ask you a quick practical question. What are the markers? Because you test for markers, blood markers, maybe others.
What are three markers that people here and at home should look at as indicators of their health? Yeah. So I can talk about blood markers, but there are other markers that are not blood. For example, VO2 max is a very important marker, and it's not blood. Even your Apple Watch can tell you that, though it's not very accurate. So that's one marker that's important. But if we are talking about blood biomarkers,
I would say glucose, or HbA1c, is very important, because it shows whether you are heading toward diabetes. Then you have ApoB, which is a marker more for cardiovascular diseases.
hs-CRP, which is a marker of inflammation. But there are a lot of markers, and I wouldn't say those are the most important. I think it depends on what you are trying to treat. My belief is that everyone is a unique person, and you need to define what issues you have, and then the right marker for the problem you have. Let's define it, and then let's try to see how you can attack it and improve yourself.
I am Arif Dalvi. I direct the Parkinson's Disease Center just 15 minutes down the road. But I had a question for Elan about the word confabulation. As a neurologist, we see that in a syndrome called Wernicke-Korsakoff syndrome. People who are alcoholics damage an area of the brain called the mammillothalamic tract,
and if you show them, you just hold your hands like this and you say, "Do you see the string?", not only will they see the string that is not there, but they will describe it in great detail.
So that area, as is true of everything in the brain, we don't know exactly what the mammillothalamic tract does, but it's sort of like an error-checking part of the brain. So in terms of LLMs and confabulation: is that confabulation coming from a lack of error checking? And can that be computationally solved? That is exactly the right kind of question you can now ask; you can ask this question computationally.
Where is the deficit? Assuming again that language generation is modeled by the system, we can break it in different ways and see if you get something characteristic of exactly what you're referring to. I think I have some ideas how you could induce exactly that and you would see very much this kind of confabulation. First of all, they're always confabulating. We have to get our minds that they don't
They're not reading the prompt. They're sort of guessing what is the appropriate response given this prompt. In some sense, what's happening in this case is it's over-guessing. It's going sort of beyond the data in a certain sense and the question is how do you model that in this kind of system. But yes, it's exactly similar to what Dr. Levin was talking about. We need to think about these things, I would say computational which is not a
His word is better, I think, psychologically, but I guess in some sense I think that when we were dealing with this particular kind of modeling,
We have
I have a theory of dreams now. The theory is that it's a short context window. You don't remember, because your activation system is actually generating: it's not the world, it's your mind creating the visual imagery, and it's premising it on a much shorter window than you usually have. Everything is internally consistent. In the early days of deepfakes, and deepfake videos in particular, it was really fun to watch,
Because they look very dreamlike. Because you'd have somebody riding on a motorcycle and suddenly they'd fly into the air on a rocket ship. And if you look at any three or four frames, it makes perfect sense. Right? And that's what dreams are. So this kind of, you know, the confabulation you're referring to could be thought of as representing exactly that. And now we can get into the guts of the system and say, well, let's modify that.
So is it a scale function? Like, before LLMs, we had auto-predict, autocorrect on our iPhones. But now with LLMs, the scale has become so huge. Is that a function of just the scale? It is. So it's two things. It's the transformer model, which parallelizes: instead of having to churn through the entire sequence, you just do it, boom, all at once. So parallelization, and then GPU scale. So yes, that basic recipe plus scaling seems to be, until the scaling laws break,
seems to be like the solution. Now, if nobody in this audience can ask the question, I'm ready for it, which is like, come on, this is not a model of the brain. We don't need all that data. That's not how we learn. We don't cram. We don't churn through, you know, billions of pieces of text by the time we learn to speak. So I think that right now there's an industry-wide, there's just a race to get to beat the benchmarks.
and what they know works is scaling, and it does work; the scaling is fantastic. But just because that solution is working right now doesn't mean you need all that scale, and there is probably a tremendous amount of waste, both in terms of how much compute you need and also in terms of the curriculum, as I call it. They train these things to just churn through all this data. If we taught them, you know, children's stories first and then gravitated towards nuclear physics later on in their education, maybe they would get there sooner, or something like that. So, yeah.
Something that would be kind of interesting to see, right: how many of you have tried to use AI? There are different ones out there, but, slight pivot, which of you have tried to diagnose some specific medical thing with yourself or someone you care about? Okay, we've all done this quite recently, I'm sure. Now, how many of you verified whether what it said was actually correct? And it was correct? Okay. Most of the time it wasn't correct. Yeah.
I think that if we are talking about the LLM and the training set and all of that, I think that what is missing in the health, wellness, biology is the training set, because the internet is great to know the literature and the language and all of that, but the data of health, wellness, performance,
is not there, or, a lot of the time, what is there is skewed by gurus who wrote a blog or something like that. So it's not really medical.
I think that that's the next frontier: how do you build, as I call it, a model on top of the foundation model. You need to build your own model on top of the foundation model, then train it, and then use the LLM in a medical setting. Because you cannot make a mistake, or you can, but mistakes should be very rare, because it's life and death.
And one last thing before we get to the questions. Something else that I think about: in a lot of ways, you're in this middle ground between direct medical practice, what people would do, and the research side, as you know from InsideTracker, right? And one of the things that I think about a lot is the rise of the so-called Google doctor, which we've all definitely done at some point or another in our lives and may even still do. Maybe now, instead of the Google doctor, it's the ChatGPT doctor.
The biohacking and longevity fields were almost inspired by what the internet did.
Those approaches would have been fundamentally impossible without the internet. Exactly. So now we want to think about what's impossible now that will be possible next year, or the year after, because of these new technologies. How are we going to change the way we think about health and longevity? As the microphone is traveling, can you talk about your swarm-of-medical-AI views? Yeah, so the thing I've been working on recently that I'm very excited about
is getting an entire collection of these AI agents. So imagine when you open up a tab for ChatGPT and you have a conversation: you're accessing this model that is incredibly capable, as I'm sure you're aware by now. But this is like going to the deli and asking for one slice of baloney. Because while you're asking that question in the tab, basically everybody else on the planet is doing the same thing.
And so, as impressive as these agents are, we need to remember you're kind of sharing; you're only getting one slice of the baloney. What is this going to look like when you have the whole ChatGPT to yourself? It's only a matter of time, economically, before it gets to the point where you could afford to have the whole thing. And then it'll get to the point where you'll have two of them, and three of them, and then a thousand of them.
If you have a thousand agents all working for you, how do you prompt them? Do you set it up like a company? Do you tell the CEO of your swarm, here's what I want you to do, and they'll have a management team? Because imagine, if we look at computers and the cost of memory: having thousands of bytes was a big deal, then millions of bytes was a big deal; now trillions is not a big deal. When they talk about tokens, you're going to be talking about megatokens.
How many millions of tokens a second can the models output? And then you're going to have a million agents that can each output a million tokens per second. How do we anticipate and plan for that reality, which I think we're going to see relatively soon? And in the area of medicine, imagine there's a VIP in the government
and something happens to them. Literally hundreds of doctors are going to get put on call to help with that situation, right? The entire hospital floor will be involved; we saw it with Trump a few years ago. Exactly, that's what I mean. The entire floor, the entire hospital, is helping this one person. None of us can afford that. We can't afford it right now. But if we look at this technology,
it'll be very reasonable to think that you have the brain power of a thousand physicians, and one's a radiologist, and one's in internal medicine, and one is talking about your psychology, and one is looking at what you ate for breakfast, and so on. And they'll all have this dialogue. Imagine something like a medical conference: the Society for Neuroscience, I think 30,000 people show up to that annual meeting. Imagine having that kind of horsepower working for you.
For one little thing, maybe not even a life-threatening situation, you say, I just want to feel better.
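The swarm-of-specialists idea described here can be sketched very roughly in code. This is a toy illustration only, with stub functions standing in for specialist agents; in a real system each specialist would wrap a language-model call with its own system prompt, and the names here are purely illustrative.

```python
# A minimal sketch of the "medical swarm": a coordinator fans a patient's
# question out to specialist agents in parallel and collects their opinions.
# The specialists are stubs; in practice each would be an LLM call with a
# specialist persona (all names and wording here are illustrative).
from concurrent.futures import ThreadPoolExecutor

def radiologist(case: str) -> str:
    return f"radiology: no imaging findings noted for '{case}'"

def internist(case: str) -> str:
    return f"internal medicine: review sleep, diet, and labs for '{case}'"

def psychologist(case: str) -> str:
    return f"psychology: screen for stress contributing to '{case}'"

SPECIALISTS = [radiologist, internist, psychologist]

def consult_swarm(case: str) -> list[str]:
    # Fan out concurrently, as you would with independent model calls,
    # then gather every specialist's opinion for a final summary step.
    with ThreadPoolExecutor(max_workers=len(SPECIALISTS)) as pool:
        return list(pool.map(lambda s: s(case), SPECIALISTS))

opinions = consult_swarm("I just want to feel better")
```

The "CEO of your swarm" question from the discussion would correspond to one more agent that reads `opinions` and produces a single plan.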
One comment on that.
I think that, in my opinion, that's the real deal: the agent, and the swarm of agents. And I'll tell you why. Because getting to a diagnosis is not the problem. Everyone knows that we shouldn't eat packaged food, and we should exercise, and we should sleep at least seven to nine hours, and we should do a lot of things that we know. But 95 percent of us are not doing it, and one of the reasons is that it's hard.
And if you have those agents and they for example.
100%. Yeah.
So AI is very good at taking a whole lot of data, putting it together and coming out with an output. But AI can't do anything with the subtleties of humanity. We have a Parkinson's neurologist. So can AI pick up a masked face?
a change in voice? AI can't do physical examinations. So AI can just spit back data that was put into it. So that's a great point, and it comes down to the different eras of AI. Now we're in this era where we have these language models and we can talk to them in English. And I would argue, we could debate this, that they're approaching artificial general intelligence, if they're not there already. But what we've got to remember is that while that model can't examine a face or listen
to the voice changes, there are other models, what used to be called narrow AI, that have been developed over the last 10 or 15 years, some longer, that can do that. Now, they're very cumbersome pieces of code, they're not easy to work with, and you cannot talk to them in English.
But in the next couple of years, those two fields are going to merge: the traditional deep-learning AI and the language models. And then I think they're absolutely going to be able to look for masked faces and voice changes and do shuffling-gait analysis and things like that. And how about understanding the subtleties of emotion? Many times a patient will come into your office,
and basically you have to pull information out of them, and you've got to understand where that patient is emotionally to ask the proper questions. There may be proper questions it could ask based on a sequence, but it cannot pick up emotion.
I think it's debatable whether or not they can analyze emotion. I would argue that they can. There's a new one that just came out, I'm sure some of you have seen it; we wanted to run it later. It's Sesame, and it's incredibly good.
As long as we can get the data for it, which we have. In response to the doctor here:
I don't think AI is going to replace doctors right away, even though in areas like radiology and others it's getting better than the average doctor, because what's going to happen is that the doctor is mostly going to spend a lot more time with patients one on one, interpreting the AI outputs for them and providing that personalized human connection.
Yes, some doctors probably will be replaced outright, but I think there will be a role for some time. Right now we are building some benchmarks, and one benchmark shows that the latest AI actually has better EQ than 62% of physicians, and it will probably get to about 90%. There is a cultural bias:
in Japan, where they trust machines a little bit more than they do in America, the number is actually somewhat higher than it is here. But it's very competitive with physicians now. I'd like to hear more about the diffusion models; autoregression is very old-timey at this point. It's, like, months old now.
So if we can talk about diffusion, I think that's going to unleash a lot of capability. Here's why I was excited about that. With diffusion you can do this in parallel. The thing about autoregression is that it's slow. It's inherently slow, because you have to produce the output, then rerun it. It's inherently sequential, and that's bad in terms of runtime.
Diffusion models can do this sort of thing in parallel. Diffusion models try to figure out the entire sequence all at once. Let me just back up real quick. I think language is inherently autoregressive, language itself. The only way to solve it is to do it autoregressively. My prediction is that you're not going to be able to do it with diffusion. People may have read that diffusion models are doing a pretty good job.
But it's on code. It's on code, and code is a human artifact that's not built autoregressively. That's not how it functions; the syntax is not built for that. That's not how it's generated. In the case of language, I don't think we're ever going to get there. So my prediction is you're not going to be able to do it in parallel. It has to be done sequentially. The language itself contains within it this autoregressive predictability. That's how you have to do it.
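The sequential-versus-parallel contrast being debated here can be made concrete with a toy example. The "model" below is a hand-written bigram table, not a trained network, and the "denoiser" is a stand-in; this is just the shape of the two decoding regimes, not a real implementation of either.

```python
# Autoregressive decoding must run step by step: each token is chosen
# conditioned on the previous one, so the loop cannot be parallelized.
BIGRAM = {"<s>": "the", "the": "patient", "patient": "recovered",
          "recovered": "<e>"}

def autoregressive_decode(start: str = "<s>") -> list[str]:
    out, tok = [], start
    while True:
        tok = BIGRAM[tok]          # each step depends on the previous token
        if tok == "<e>":
            return out
        out.append(tok)

# A diffusion-style decoder instead starts from a full-length masked draft
# and refines every position at once; here a single parallel "denoising"
# pass fills each slot independently (the denoiser is a fixed stand-in).
def diffusion_style_decode(length: int = 3) -> list[str]:
    draft = ["<mask>"] * length
    denoised = ["the", "patient", "recovered"]  # stand-in for a learned model
    return [denoised[i] for i in range(len(draft))]  # positions in parallel
```

The speaker's claim is that natural language only yields to the first loop, while code, built non-sequentially, may tolerate the second.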
We also have different modalities with which we experience the world, the classic left-versus-right-brain paradigm: one part of us sees the world in a sequential,
serial kind of fashion, and the other part of our mind sees things all at once, like recognizing a face; we don't really decompose that, it's sort of just there. And I suspect that maybe these different mechanisms we're discovering in technology, autoregression and diffusion, are both valid models. We just need to be looking in different aspects of the brain to find them.
You know, three things. First, healthcare has unstructured and unlabeled data, and in terms of having your model actually turn from detection to diagnosis, there's a thin line of difference, right? When you say diagnosis, you're talking about CPT codes, you're talking about FDA approvals. So my questions are very, very simple. Number one:
how can AI, LLMs, and all these models structure and label data for the next generation of researchers? My second question is: what's the workflow for taking AI into actual clinics and healthcare? Making a workflow, getting FDA approval, putting it into a device trial, having the FDA come and see what you did, and getting it through: that's how you get it into healthcare administration. My third question is:
what challenges do you see? Because convincing doctors, as he said, is very, very important, right? There's a thin line of difference between diagnosis and detection: we can claim that a model can detect, but claiming diagnosis is a very big thing. And what is the insurance process? That's a different industry we're talking about, right? Yeah, I can talk about that, because I actually worked at Mass General Brigham for several years, and I was working on deploying AI into the radiology clinic.
I can talk about the FDA briefly. They've approved over a thousand AI tools. Most of them are not commercially successful, because hospitals don't have any money to spend on this sort of thing, and they're basically beholden to the insurance companies to pay the bills. You mentioned CPT codes: there are actually, I think, two CPT codes for AI,
but generally insurance doesn't cover any use of AI. So it's actually very challenging for hospitals to find money for AI, and that's one of the reasons the percolation, the deployment of the technology, is going to be a lot slower. You'll probably have AI doctors at home
that you're using before AI is actually in the hospital. The other thing is the FDA: another reason it's going to be slow getting into the hospital is that the FDA has not released any guidance on more general AI. I actually think that people are going to be using these general AI doctors at home,
unfortunately, long before they're actually used in the clinic. One comment I had about what the doctor just said: as a physician, I am worried about losing my job. I thought it was going to be the pixel-based people, the radiologists and pathologists, who would lose their jobs, and that as a neurologist, with so much touchy-feely diagnosis going on, I wouldn't be at risk.
And then I consulted with a startup somewhere in Illinois. They were sending me videos of patients with essential tremor and Parkinson's, and I was diagnosing based on the video, and they were actually looking at facial recognition and the voice of the patient.
But there's a lot of untapped medical data. For example, I go to the OR, even as a neurologist, to map the brain for deep brain stimulation. Just a second, one comment on what you said about replacing physicians: think about the self-driving car. We've been talking about it for the last twenty, maybe fifty years, and it's still not here.
And I think that what a clinician is doing, and I have high respect for clinicians, is a bit more than driving, so I don't think we will see it in our generation. That's one. The second is, as some other people said, the human touch is very important, and people want the human touch. I built InsideTracker to be completely automated and scalable,
I just wanted to make one comment if I may as a physician actually to my physician colleague here. I think
the talk about physical examination being something AI can't do. That's very much part of the current, or even the historic, paradigm of medicine. But the examination is really a proxy for what's going on inside, which imaging and other diagnostic techniques now show us. So while physical examination remains important, I think the new paradigm of medicine will actually be based on large data sets and so on, combined with imaging. And I think
it's probably wrong, in my mind, to focus on the lack of physical examination and think AI won't be able to take over what doctors do, because examination will just become less important as other, more accurate methodologies take over. I was going to say that the current economy of medicine is dependent on this sort of diagnostic-code
infrastructure.
But they can think more subtly, and they can make predictions: with this genetic profile, and these cardiovascular measures, and your current lifestyle, and all of that stuff that, frankly, human doctors are not equipped to deal with as a data stream. They can't really think those things through. The AI will be able to, and it will be able to say, you know what, you should reduce sugar a little bit and you should exercise 15 minutes more a week.
Why? Maybe the insurance company wants to know why, right? And it'll say, because the data says so. But it's more than that. I think that today, if you look at what clinicians are doing, and we have some clinicians here, so correct me if I'm wrong, basically you see that with the EMR and entering a lot of information, the 15 minutes are gone, and that's it. The next patient is coming.
I think that what will happen now, or in the future, is you'll have all the information on your computer or tablet or whatever, and you'll have time to understand the patient and also to provide the intervention that will work for him. For example, we used to say that people who are overweight are basically lazy; now we know they are sick, and now they have GLP-1 for that.
So I think we'll start to divide the population into more and more buckets, and then maybe the AI will help us know which bucket this patient is in, and the job of the clinician will be to communicate it and to help him implement the intervention. In terms of chronic disease prevention, eighty percent of chronic diseases can be prevented. But how well can AI
predict and model, I guess, the dynamic interactions happening with multifactorial chronic disease? For example, someone who has diabetes may develop cardiovascular disease or chronic kidney disease. How well can AI differentiate between the biomarkers from one particular disease versus
the others, looking at them all together instead of just one thing at a time, with the multi-agent approach? Yeah. Let's start with what's happening right now. Our health care system is basically sick care. The clinician will treat you, or won't look at you, or will kick you out of the office, if you're not sick. Okay? And we need to move to a prevention system, basically starting to look at
I went to my clinician and told her about my ApoB. She told me, what is ApoB? And I'm in Boston, so it's not like I'm, I don't know, in Alabama. So we have a problem: clinicians were trained, learned, 30 or 40 years ago, and they know what they know. They are very busy. Poor people. I really don't want to replace any clinician.
They're working hard and doing their best, but they don't have time to go to PubMed and read a new paper. So I think what the AI can do is send them a summary, or even look at you when you're coming into the practice. The AI is great at summarizing data: look at all the data available, all the history that you have,
and all the medical data that's available, and give the clinician one sentence, one paragraph, that he can read to you and basically explain where you are, and then provide some intervention that you can do. Industry and employment for the younger generation are going to be totally skewed and flipped as time progresses, so what can we, the younger generation, do to prepare for all this?
Yeah, fantastic question. Historically, the answer was to specialize, to find a particular niche and go into it, whether in medicine or in general: to be a world expert in a very, very narrow thing.
I don't think that's going to be competitive anymore, compared to having a broad, landscape view of what's actually happening. So I would encourage you to run toward these tools as fast as you can and try them. The thing we were talking about earlier, vibe coding, the ability to create software.
It's going to explode. Right now we don't realize it, but software is extraordinarily expensive. There are lots of apps and tools and things that we would love to have, as individuals, as companies, in your practice, and you have to rely on the software industry to create them, because a single phone app will cost half a million bucks to get out the door.
That's going to completely change. So I don't think we're going to need fewer software developers; I think we're going to need more software developers than ever before, and the job is going to change. I have a stack of punch cards in my office, because here at FAU we had an early IBM computer. At one point, that's what it meant to be a computer programmer: you were literally down in the ones and zeros. And now we can think in English.
The dream of computer science from the 1950s was that we could just talk in plain language and get software out of it, and I think this, in some sense, is the killer app of LLMs: they can write code, they can create working software, and historically that took teams of really trained, talented people. Now we're getting this kind of literacy. Real quick: if we go back to the Middle Ages, there was no word, no concept, of being illiterate.
There was no expectation that most people would be literate. The idea was that you had this professional class of scribes, and they took care of that for you. Nowadays, we don't have a word for the equivalent in health, because there's a specialized class of physicians, and it's your job to tell me what's wrong, and I don't bother with it. I don't think we're going to stay as comfortable outsourcing that.
We're going to take responsibility for that medicine, and we're going to need the AI tools to do it. So we're going to be able to create these custom apps, and we're going to be able to create this customized, personalized medicine for everybody. Yeah, a couple of comments about that. I think that's a very good question. As the founder of a company, I can tell you that you have to prioritize a lot of what the team will do, and you're doing maybe 1%, maybe even less, of what you want to do.
So I completely agree with Will: we'll need more software developers. What will change is that software developers will be much more efficient and will do more. For example, if you're testing a model, trying to find the best model for a specific question, instead of looking at two different approaches, you'll look at 20 different approaches and do it in half the time. Another analogy is to
compare it to the Industrial Revolution that happened, I don't know, 100 years ago. A lot of farmers basically lost their jobs, because one combine can do the work of, I don't know, 1,000 people. But if you think about code, code is not limited. Land and houses are limited; code is not limited. So basically, in my opinion, we'll need more coders, and basically everyone here, even the biologists, can be a coder right now.
I have two quick comments. Sorry, I have to interject, but just
About the software engineering thing: I would not do a degree in computer science, just because computer science is mostly going to teach you the theory of algorithms and all this stuff that, in my opinion, is not very useful. Unless you're really interested in that and you want to try to be a professor, I wouldn't do it.
And I think a lot of software engineering jobs will be replaced, so I would definitely consider a different career. But I wanted to mention this thing about EQ. People are saying AI has way better emotional intelligence. I think the study people were referring to was done specifically on responses to messages in a patient portal.
And because the doctors are really overworked, they tend to give very terse responses to those messages, whereas the AI gives more nuanced, empathetic responses. So in that context, yes, the AI is much more pleasant to talk to, more aware of emotions. But
I don't think
applying more health care resources is the answer. People seem to consume health care resources at a much higher rate than you would expect, and there are diminishing returns: studies showed that doubling the amount of health care you utilize doesn't improve your outcomes.
So there's this puzzle: why are people consuming so much health care? And he argues it's because people have this emotional need to feel like they're being cared for. So really, what they're doing is fulfilling that emotional need. I don't know if AI can do that; maybe people will get that from AI. But... I want to ask a question. Recently, health plans and large entities such as pharmaceutical companies, et cetera,
and also very large group practices, which are owned by hospitals, et cetera, are putting provisions in their contracts that prevent the distribution of patient data. In other words: who owns the data? This evening is built around data, and the issue is not so much that data is getting consumed;
it's who owns that data, and is it going to be released? Is it primary data? Is it secondary data? Is it peer-reviewed data? Under HIPAA, patients do have a right to get their data, but what I observed when I was at Mass General Brigham and other hospitals is
that it can be very challenging, especially with radiology images. If anyone has tried to get their radiology images, it can take a long time, and they have to give you a CD-ROM or a DVD. It's very challenging. And I actually think the hospitals are making it even harder, because they're realizing the value of their data. They're also more reluctant to share data with researchers, I think, because they're realizing that
the data has enormous value. I think big companies like Microsoft and Google are working with hospitals on agreements where they can basically bring in an LLM or foundation model and train it on all of the data. So I think that's how it will be done. But
if a patient wants to get all their data and upload it to some sort of cloud service like ChatGPT, unfortunately, I think it's way harder than it should be.
Just real quick: while I think it's amazing going back and looking at these data sets, it's essentially saying you've got all these great leftovers in the fridge. We collected this data for some other purpose, so let's go harvest and mine it. And that's fantastic. But now that we understand the value of these tools, we're going to be collecting data at a scale we've never seen before, whereas before we took a few data points. One of the things I've been thinking about is the Star Trek tricorder:
you swing this smartphone-style device in front of someone and it captures all kinds of data. I think we need to be thinking about how to harvest the value of the data we've already spent the money to collect, but maybe more importantly, how to revolutionize the healthcare system so that it's actually capturing the data, structuring it, and putting it in a usable form. I think we've got an important data collection question right over here.
Hi everyone, hope you're enjoying today's episode. If you're hungry for deeper dives into physics, AI, consciousness, and philosophy, along with my personal reflections, you'll find it all on my Substack. Subscribers get first access to new episodes and new posts, behind-the-scenes insights, and the chance to be part of a thriving community of like-minded pilgrims.
Whenever someone, for example, gets sick, and he mentioned who owns the data: there are already massive amounts of,
for example, CAT scans available for people who may have passed from certain illnesses, or survived and are in remission, as they progressed through the disease.
So what is that data doing right now? Is it just sitting dormant somewhere, and can it actually be utilized? Well, with the medical imaging data, the hospital does own it, in the sense that they can use it for commercial purposes, for research, for a lot of things. Like I said before, there are big companies like Google and Microsoft that are working out contracts with hospitals to essentially train something like GPT-4
on all of the radiology images and all of the text reports. And if you train something like GPT-4 on all of the images in a large healthcare system and all of the reports, I actually think you'd have something at the level of a radiologist. I saw this South Korean company named Vuno that's very adept at AI in radiology, so I wanted to know: what are the roadblocks here in the US that stop us from doing the same thing?
The FDA is one of the strictest regulatory bodies in the world. It's almost more exciting to think about some of the emerging countries where things can be deployed more readily and there's actually more need. I'm not surprised that you're seeing AI in other countries just because of the FDA.
To the physician's point about social determinants of health: they actually contribute 30% to a patient's health outcomes. Thirty percent. And a person can't change overnight the zip code where he stays. Where the patient lives, the zip code, determines to some extent how long the patient will live. It is unfortunate, but it is the truth. From that perspective,
the stress of living in that particular zip code affects the person at a molecular level and a cellular level. So what are we going to do about those things? Move to the right zip code? At least... you know, has anyone here tried to get a therapist? It's very hard. My point at the beginning of this was that doctors
are not trained for this. They're only trained in anatomy, physiology, and the biological component. They're not very good at handling behavior change and psychological stuff, but it seems AI can provide a sort of counseling. I mean, behavior, we do know how to change. That's one of the things cognitive behavioral therapy does, right? It just takes a lot of time talking. And most sick people I know do want to get better. People with chronic fatigue, like the two to four percent of people with chronic fatigue I was talking about, desperately want to get better. It's just that when they go to the traditional healthcare system, the doctors have no idea how to treat it, because they have such a complex condition.
Okay, let's hear from Dan van Zandt. Yeah. So I think we have two perspectives on AI. We have the skeptical perspective: oh, AI won't solve this problem, it won't solve that problem. And we have the hype perspective: this is going to change everything. But practically, my view is that it's not going to solve all the problems in medicine and society; it's going to make incremental progress on some of them. I'm curious where each of the speakers on the panel sees the best target in the immediate future. I'm talking the next three months.
Maybe, if a company were sufficiently motivated, where could AI make incremental progress on a specific problem in medicine? Not solving all the issues with society and zip codes and everything else, but a very specific thing where you see AI as being really helpful. Okay, Elan. I kind of want to split my answer. I think this sort of AI-first... maybe, I don't know if this is going to happen in the next three months, but where
people are able to intelligently pre-diagnose: to make their preliminary medical decisions, like should I go to the doctor or not, in a more informed and intelligent way, potentially. I don't see which industry and which company is harnessing that exactly, so it's hard to see the profit motive, which usually means it doesn't happen, right? Although I guess OpenAI themselves could see this as a utility
and then sell it as a utility. But I do think there is a very strong possibility in the very near future, I'm hopeful, that we're going to see straight-up AI breakthroughs in terms of actual hypotheses. You might know a little bit about this: Deep Research and these other engines. I haven't gotten access to it yet, but Coscientist, which is a Google product,
is
We talked about data and data ownership. It's very possible that companies sitting on silos of data don't want to share them; they need the profit motive. But they can then use these tools to actually develop novel drug interventions and things like that. So I think soon we'll probably see some actual, effective practice of science using these. Okay, now, the next three months in AI: what's a specific problem that can be solved? Don't paint the whole
picture of the world changing in 35 years; what do the next three months look like? Well, I think it's the ability for everybody to get their own physician, essentially, the kind of concierge medicine that only the very wealthy can afford now. In some sense, if we look at how much health care costs, the $6 trillion price tag, as a culture we can't afford it. We have to change something about that. And so I think the opportunity is
to
maybe not even the current version, but the promise of these systems: not just that they will know internal medicine and cardiology, but that they will be therapists. They will become your friend. They will know how to talk to you. They'll be your dietician, your gym coach, this whole thing. So they're going to be talking to you about your blood pressure, but in the context of, hey, you know, you could save some money this week on your grocery bill if we make these recipes. It will be holistic, because it's a polymath. It's not just a physician; it's all of these things at once. And I think
that's something you can do today. You can go into the prompt and say, can you please simulate being the following 10 things and talk to me about what I should do today? Yeah, I will start with a joke. So, a joke about AI. A CEO of a company wrote a few bullet points that explain the vision of the company. And then one of the executives said, no, no, that's too short, please extend it. So using AI, he wrote 10 pages.
And then one of the employees received the 10 pages and asked the AI, okay, summarize it to five bullet points, and got back the same thing. So I think that's the power of AI: basically taking big data, summarizing it, and getting it to the point. So I think that we can, maybe in the next three months, and it's available today, take all of PubMed and all the information and come to a patient with five bullet points: here's what you need to do today.
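The "compress it back to a few bullet points" idea can be sketched without any model at all, using a classic frequency-based extractive summarizer. A real system would run an LLM over the patient's records and the literature; this toy, with made-up example sentences, only shows the shape of the step.

```python
# A minimal extractive summarizer: split text into sentences, score each
# sentence by the average corpus frequency of its words, keep the top n.
# This stands in for the LLM summarization step described in the discussion.
from collections import Counter
import re

def top_bullets(text: str, n: int = 2) -> list[str]:
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    freq = Counter(w for s in sentences for w in s.lower().split())

    def score(s: str) -> float:
        words = s.lower().split()
        return sum(freq[w] for w in words) / len(words)

    # Highest-scoring sentences become the "bullet points".
    return sorted(sentences, key=score, reverse=True)[:n]

bullets = top_bullets(
    "Sleep seven hours. Sleep and exercise matter most. Avoid packaged food."
)
```

Frequency scoring is crude, but it captures the design choice: reduce a large body of text to the few statements that carry the most weight.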
Yeah, and a similar thing is what I was talking about before: the AI could interview the patient before they see the doctor and then summarize the most important points, instead of just doing an intake form where the data is very compressed into something like a five-point scale: how's your sleep, zero to five?
The AI can spend more time exploring than that, then capture the salient points and review all the medical history as well. So yeah, I think chart summarization is a very low-hanging-fruit application we're already seeing. I did want to say something. This is a medicine and AI event, and we have lots of different types of people here, all with incredible skills and areas of expertise. But I think something important, to sum up today, is that we want to talk to all the physicians in the audience, people like Dr. Dalvi,
who spent years of their lives, arguably the last years of their youth, giving up a part of themselves so that people like us could live healthier today and even be in this position. Some of us in this room may even reach centenarian status, or, if we hit longevity escape velocity, hopefully more. But I want to take a little bit of time today for all of the physicians in the audience, the people that gave up their lives to save ours. What do you think about AI? How would you use AI? Are you against it? I'll give it to Dr. Dalvi.
Thank you, firstly, for inviting me, and thanks to the panel for a phenomenal discussion. I've used ChatGPT in a practical sense. For example, one of the problems with neurology is that it's one of the high-burnout professions, because there's so much documentation involved.
I've created custom GPTs where I type in a paragraph's worth of what I've seen in the clinic and talked to the patient about, and it converts it into a level-five note that is billable, and I get my money for it. And the advantage is I don't take the computer into the patient's room. I don't use the EMR. I have eye
contact with the patients, so they are very happy, I'm very happy, and my hospital billing system is happy. So that's a practical use. It's also a phenomenal research tool. If I'm writing a paper, instead of reviewing PubMed article after article, I'll use it to pull out the five to seven that are really relevant to me and get started.
It's also, for me personally, a good teacher. I've become, suddenly in my old age, interested in philosophy, so I asked Claude to imagine Dostoevsky, Tolstoy, and Turgenev, my favorite Russian authors, discussing The Myth of Sisyphus, and it came up with a little short story that explained it to me like no professor would have. So, phenomenal uses, practical as well as intellectual. Awesome, thank you, Dr. Dalvi.
Well, I love AI. I think it's wonderful. I think it's definitely going to change a lot of things. I use it nearly every day. As a matter of fact, I'm working with the gentleman that was here with me tonight on a new company that's going to be using AI in an area of healthcare that has not been addressed at all, which is the long-term care arena. We have an aging population that has not been served very well.
I think it's wonderful. I commend all of you guys for doing it. Please keep it up.
I'm not afraid of losing my job, and I would tell anybody, a hundred years from now they're not going to lose their job, because you still need the human mind, you need the interaction, you need the touch, you need everything else that goes along with it. Unless you're going to have a Mr. Data from Star Trek, it's going to be hard to eliminate that. I appreciate all you guys. By the way, we actually have an MD candidate here who came all the way from Boston just for this event.
What would you say to people right now who are the next generation of physicians? Well, I mean, I don't know. I was just talking to one of my colleagues, the other neurologist there, and we just see medical education going in a different direction. It's all about passing a test now. OK. And, you know, I have other friends who are teaching right here at the university,
at the medical school, and have complained that the students are not focusing on the clinical aspect of it: the auscultation, the percussion, all of the different things that we did. And that's what you need to do, because that's going to make you a good doctor. It's going to make you a good clinician, a good diagnostician. OK, passing a test is one thing. Yeah, that's great. And you will. You will. I guarantee you will. But you've got to go the distance, right? Because that's a person, right? That's a person.
That's a person, each uniquely different, and you have to treat them as such. Future doctor right here. I'm a medical student in Boston. I go to Boston University School of Medicine, in their accelerated BA/MD program. In terms of AI, I'm definitely interested in the way it influences our medical education right now, because a lot of students are using AI to teach them the material.
Because it can teach them better than some of their professors, unfortunately, and also just due to the time constraints; like you're saying, passing exams is very hard. So that's just another area to think about, in terms of clinical AI and in terms of medical education. What are your concerns with AI? AI in medicine? In the clinic or for patients at home?
I think it's important to think about when you're stuck on the phone with one of those robotic voices, and they're not listening to you, and you're trying really hard to explain something. AI has come a long way; it is very good at reaching states that are close to those which, I would say, are most intimate with our phenomenology.
I would say I'm very bullish on AI from two perspectives really.
One is offering really the promise of universal health care, which currently isn't available to many people in this country or around the world because of cost and access and availability. And I think to have, as one of the panel members mentioned, to bring that cost of consultation down to near zero is going to be an incredible thing. And many people suffer because they don't have access to pretty basic diagnosis and treatment. And I think that's an incredible opportunity.
I think the other great opportunity is using these very large data sets to get more insight into complex diseases, drug development, and those sorts of things. And also longevity, this sort of emerging science of both lifestyle interventions and drug interventions for longevity. And I think a lot of these are very complex problems that AI is very well suited to tackling. Even just plain trial and error. Do you think
that we could use AI not only to solve unusual medical diseases, but to come up with very unusual solutions to them that we never thought about before? Well, I think that's absolutely right. One sort of anecdote I'd give you: I remember as a medical student, obviously, and you'll know this, you're sort of taught how to take a history and examine a patient. It's a very structured approach to getting information from the patient, both
reported information and physical information. And I remember my consultants, as we call them in England, attending doctors here, would basically hear the first one or two lines of that story and immediately jump to the answer. And that's what we call experience. But that's also what AI will be brilliant at, because it's had millions and millions and millions of training examples.
Thank you. Why don't we have a theory of everything for medicine or biology? This is something that's been chased after forever in mathematics and physics. Why isn't it there in biology yet? And could having something like that also let us not have to die the way we do anymore, not lose the people we care about?
Yeah, real quick, I want to thank Kurt for traveling; he always makes the discussion quite interesting. And I want to thank Addie, who did a tremendous amount of work organizing our speakers. I've received several messages, emails, and comments from professors saying that they recommend Theories of Everything to their students. And that's fantastic. If you're a professor or lecturer and there's a particular standout episode that your students can benefit from, please do share. And as always, feel free to contact me.
New update: I've started a Substack. Writings on there are currently about language and ill-defined concepts, as well as some other mathematical details.
Much more is being written there. This is content that isn't anywhere else. It's not on Theories of Everything; it's not on Patreon. Also, full transcripts will be placed there at some point in the future. Several people have asked me, "Hey Kurt, you've spoken to so many people in the fields of theoretical physics, philosophy, and consciousness. What are your thoughts?" While I remain impartial in interviews, this Substack is a way to peer into my present deliberations on these topics. Also,
Thank you to our partner, The Economist. Firstly, thank you for watching, thank you for listening. If you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself, plus it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm,
Which means that whenever you share on Twitter, say on Facebook or even on Reddit, et cetera, it shows YouTube. Hey, people are talking about this content outside of YouTube, which in turn greatly aids the distribution on YouTube. Thirdly, you should know this podcast is on iTunes. It's on Spotify. It's on all of the audio platforms. All you have to do is type in theories of everything and you'll find it. Personally, I gained from rewatching lectures and podcasts.
I also read the comments. There's a Patreon where you can donate whatever you like; there's also PayPal, there's also crypto, there's also just joining on YouTube. Again, keep in mind it's support from the sponsors and you that allows me to work on Toe full time. You also get early access to ad-free episodes, whether it's audio or video: it's audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier. Every dollar helps far more than you think.
Either way, your viewership is generosity enough. Thank you so much.
▶ View Full JSON Data (Word-Level Timestamps)
{
"source": "transcribe.metaboat.io",
"workspace_id": "AXs1igz",
"job_seq": 2821,
"audio_duration_seconds": 5086.73,
"completed_at": "2025-11-30T21:39:33Z",
"segments": [
{
"end_time": 26.203,
"index": 0,
"start_time": 0.009,
"text": " The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets. They recently published a piece on China's new neutrino detector. They cover extending life via mitochondrial transplants, creating an entirely new field of medicine. But it's also not just science, they analyze culture, they analyze finance, economics, business, international affairs across every region."
},
{
"end_time": 53.234,
"index": 1,
"start_time": 26.203,
"text": " I'm particularly liking their new insider feature was just launched this month it gives you gives me a front row access to the economist internal editorial debates where senior editors argue through the news with world leaders and policy makers and twice weekly long format shows basically an extremely high quality podcast whether it's scientific innovation or shifting global politics the economist provides comprehensive coverage beyond headlines."
},
{
"end_time": 64.514,
"index": 2,
"start_time": 53.558,
"text": " As a TOE listener, you get a special discount. Head over to economist.com slash TOE to subscribe. That's economist.com slash TOE for your discount."
},
{
"end_time": 91.408,
"index": 3,
"start_time": 66.169,
"text": " If you imagine a VIP in the government and something happens to them, literally hundreds of doctors are gonna get put on call. None of us can afford that. But if we look at this technology, it'll be very reasonable to think that you have the brain power of a thousand physicians. What is this gonna look like when you have the whole chat GPT to yourself? It's only a matter of time where you can afford to have a thousand agents all working for you."
},
{
"end_time": 118.285,
"index": 4,
"start_time": 93.933,
"text": " What if every person on Earth could have access to a personal team of medical specialists, not just one doctor, but hundreds working in concert available 24-7 and at virtually no cost? In this episode, we explore how AI is revolutionizing healthcare through what experts call medical swarms, so armies of specialized agents that could make concierge medicine available to everyone."
},
{
"end_time": 141.527,
"index": 5,
"start_time": 118.473,
"text": " Joining us are a distinguished panel of experts, AI Professor William Hahn, who's the founder of FAU's Machine Perception and Cognitive Robotics Lab and returning guest of this channel. Here he is discussing Wolfram and Consciousness. Link in the description. There's also Dr. Gil Blander, an MIT Aging Research veteran and founder of Insight Tracker. There's also Dr. Dan Elton,"
},
{
"end_time": 161.834,
"index": 6,
"start_time": 141.766,
"text": " an NIH scientist developing AI for medical imaging and AI professor Elon Berenholtz, founder of Florida Atlantic University's Machine Perception Cognitive Robotics Lab. My name is Kurt Jaimungal, and I'm honored to have been invited to moderate today's Polymath Medical Salon at FAU's Gruber AI Sandbox."
},
{
"end_time": 183.66,
"index": 7,
"start_time": 161.834,
"text": " Keynoted by pioneer biologist Michael Levin, here's his presentation from Polymath, link in the description. Curated by Adi Cha of Ecolopto, they held the first Longevity Hackathon at MIT with Augmentation Lab featuring Toe guests like Stephen Wolfram, Joschabach, Manolis Kellis. And this was also curated by academic and medical philanthropist Ruben Gruber."
},
{
"end_time": 208.729,
"index": 8,
"start_time": 183.66,
"text": " Good evening, everybody."
},
{
"end_time": 229.923,
"index": 9,
"start_time": 209.002,
"text": " We're really excited that you guys are here for this kickoff event. We're here in the Machine Perception Cognitive Robotics Laboratory, something I started with Elon here about 10 years ago. And then we expanded into this beautiful space we call the AI Sandbox. So first and foremost, he's not going to like it, but we have to thank Ruben Gruber back here. Give it up for Ruben. He made this all possible."
},
{
"end_time": 254.497,
"index": 10,
"start_time": 233.626,
"text": " It's not just for building us this beautiful space that enables events like this, but in this particular event for serving as a catalyst for what we think is a very exciting space that's combining the improvements we've seen with artificial intelligence with some of the oldest goals in the world, namely medicine and longevity."
},
{
"end_time": 284.821,
"index": 11,
"start_time": 255.128,
"text": " And we believe that this is going to be one of the killer or rather life saving applications for artificial intelligence. So I'm very excited here to share. We're going to have a nice dialogue with the panel, but we want your input. The whole point is to create a conversation factory and to have a loop that's connecting the practitioners, the physicians, the people who need care, the people who give care, and all of the amazing young people here that are going to go out and use these tools"
},
{
"end_time": 307.637,
"index": 12,
"start_time": 284.821,
"text": " to make those things happen. Go ahead for Dr. Han. Thank you. You may have seen me running around practically like a chicken without a head for basically the whole event. My name is Adi Shah. I run this thing called Ecolopto, which is Greek for incubate if I got Google translate correctly. And I basically run a research institute and this research institute is kind of a"
},
{
"end_time": 329.462,
"index": 13,
"start_time": 308.114,
"text": " Somewhat of a sloppy excuse for me to be able to better understand why we're here on this planet. What is purpose? Why are we here? What are the unknowns, unknowns of biology, math, physics, right? All these types of things. But if I had to sum up Echolopto in one word, beyond just doing hackathons, research hackathons and salons like these and, and you know, all sorts of things, right, is that I want to be able to"
},
{
"end_time": 353.558,
"index": 14,
"start_time": 330.128,
"text": " Hello, my name is Gil Blander."
},
{
"end_time": 365.367,
"index": 15,
"start_time": 353.882,
"text": " Can with the idea of the company that I want to start the company name is inside trucker and what we're trying to do is helping human to live better longer based on what's happening inside the body."
},
{
"end_time": 394.974,
"index": 16,
"start_time": 365.896,
"text": " So we are looking at a lot of variable we're looking at around 50 blood biomarkers, DNA, data from wearables, we even added the food recognition so we are basically extracting what you eat and what you supplement, combine all of it together and as I discussed with Will, trying to build in a way digital twin for you and based on that providing to you the best intervention for you like a laser focus intervention,"
},
{
"end_time": 422.995,
"index": 17,
"start_time": 394.974,
"text": " Hi everyone, Dan Elton here. Like many of the people in my class, I got into machine learning and AI and eventually I got into doing research on AI for medical imaging at NIH. So I'll just jump into some of the applications that I've been working on and that I'm excited about. One of them is"
},
{
"end_time": 453.609,
"index": 18,
"start_time": 423.626,
"text": " Basically this idea of unlocking additional information in medical images and for each of these different Things we can you can get very precise measurements, which can be used for risk prediction So I've done some work on cardiovascular disease risk prediction using these biomarker measurements the application I want to talk to you about today this idea of using AI to help with help treat patients with chronic"
},
{
"end_time": 484.07,
"index": 19,
"start_time": 454.701,
"text": " I actually had long COVID for over a year. We were just talking about that with somebody. And when I had long COVID, I started reading a lot about chronic fatigue syndrome. It actually really blew my mind that actually about 2% of people worldwide have chronic fatigue syndrome, which is extremely debilitating and really reduces quality of life. But the thing is that the medical system is very poorly equipped to handle these sort of"
},
{
"end_time": 512.841,
"index": 20,
"start_time": 484.787,
"text": " persistent complex conditions. What we could do is before a patient goes to see a doctor, they could actually talk to an AI first. Basically, the point is that doctors really suck at handling these conditions. They only look at the biology, they're not equipped to handle the psychology and the sociology, and they're really overworked. They're only going to spend 40 minutes tops"
},
{
"end_time": 542.517,
"index": 21,
"start_time": 513.626,
"text": " And that's not nearly enough time to understand a complex condition. Um, so I think AI can really help fill those gaps. It could be AI deployed in the hospital or it could be something you just access from home. Hey everybody, Yvonne Barinholtz. I am a co-direct the MPCR lab with William. So I will, I'll state in sort of my, my position first, which is that I think that large language models, the, the underlying math of"
},
{
"end_time": 572.261,
"index": 22,
"start_time": 542.739,
"text": " What we have now in these models is not just a really cool piece of technology that we can do important things with. It is certainly that and we are going to do very important things with it. I think we should be very optimistic and hopeful about the extraordinary science that's going to come from"
},
{
"end_time": 601.681,
"index": 23,
"start_time": 572.483,
"text": " That's not what I'm personally most excited about. What I'm personally most excited about is that we have a brain in the jar. I don't know if I have the floor right now to sit and try to convince you, but my theory is that we've modeled something very fundamental about cognition itself, certainly language. And we've captured something so essential that we can now successfully model a human brain, not all of it, not the sensory pieces,"
},
{
"end_time": 631.476,
"index": 24,
"start_time": 601.971,
"text": " Not a lot of sort of the subcortical kind of activity, but the thinking cognitive brain is something we can now model directly in a system that's not us, that we can manipulate arbitrarily. We can do experiments with it that presumably, and again this is my conjecture, but will have implications for ourselves. And so one of the things, it's in my poster in any case, I believe that there's"
},
{
"end_time": 661.237,
"index": 25,
"start_time": 631.817,
"text": " Core insights into what the nature of memory is that is distinct from the way neuroscience has thought about it since its inception. And the insight comes from these models that these models seem to actually operate in a way that is probably synonymous with the way our brain functions in a certain sense. I'm sorry if that's a little abstract, but what we can do now that we could never have done as a species before is"
},
{
"end_time": 689.582,
"index": 26,
"start_time": 661.834,
"text": " Say we're going to take some analog to the human brain that can learn the kinds of things that we can do. We can test it the way we test ourselves and we can start to experiment with it. William and I were talking last night about a curriculum, right? You can test. There's a big debate. Math. Is math important? How many people think, raise your hands, how many people think that it's really important that we continue to teach children basic mathematics, arithmetic, long division,"
},
{
"end_time": 719.65,
"index": 27,
"start_time": 690.009,
"text": " How to deal with fractions factor all of that raise your hands Okay, how many people think maybe that's not important Okay, I'm not here to debate this Here's what I can later though. We can debate that. Here's my proposal. We can we can test that We can take a fresh brand new LLM we can train it on the entire curriculum that you would receive without the math and then you test it on other stuff, right because"
},
{
"end_time": 741.92,
"index": 28,
"start_time": 720.384,
"text": " We're not presumably you didn't raise your hands because, you know, when you're calculating the tip, you still need to be able to do a little bit of decimals. That's not what you meant. You meant there are broader implications for mathematical training that are going to spill over to other kinds of. Well, now we can test that. And the same for music, foreign language. Right. Now for something less controversial."
},
{
"end_time": 771.493,
"index": 29,
"start_time": 742.551,
"text": " is this guy right here, who came all the way from Canada, which may or may not become part of the U.S. at some point. But right now it's his own thing. And why don't you tell people what you do and why you're here. Hi, I'm Kurt Jaimungal and I have this channel called Theories of Everything on YouTube. You can search it. It's a place where I interview professors and researchers on the latest theories. So for instance, Michael Levin on limb regeneration and his anthrobots and"
},
{
"end_time": 801.715,
"index": 30,
"start_time": 771.715,
"text": " Xenobots and Stephen Wolfram on his physics project. It's essentially what are the largest questions that there are in reality about nature, about meaning, about life, and then trying to answer them rigorously and speak to people who have theories about them, especially those that are new, but rigorous and technical. So I want to know what is the background of the audience here? Who here is in neuroscience? Raise your hand."
},
{
"end_time": 833.029,
"index": 31,
"start_time": 803.234,
"text": " And who here is in math? All right. What about physics? All right. Well, I'm going to just ask a set of questions to these these people and then I'm going to open the floor to you all because they're the quality of the questions to Levin were so high. Yeah. So what can we learn, Elon, from LLMs about brain disease? So funny you should ask. So that's what I was about to talk about. And you're not allowed to use the word autoregressive."
},
{
"end_time": 866.101,
"index": 32,
"start_time": 836.613,
"text": " Okay, so let me, can I first define autoregressive and then refer to it? No, I'm kidding. Anyway, so as I said, I think that there's the same math, the same computation that's happening. We won't name it, but that's happening in large language models. Basically what they're doing is they're taking in an input, right? It's a sequence. You ask them a question and then they just guess the very next word that they're supposed to say based on that question."
},
{
"end_time": 896.067,
"index": 33,
"start_time": 866.647,
"text": " And then they take that word and they feed it back into the input sequence. So you have the question plus that word, and then they say, oh, what's the next one? And so this is fundamentally what they're doing. They're just guessing this next token. And my theory is that that is at least what human language is. I have lots of reasons that I believe that, but I don't know. I need to elaborate on them right now. But if that's the case, then when somebody is generating language, they what they have to do is retain"
},
{
"end_time": 913.797,
"index": 34,
"start_time": 896.544,
"text": " So what you're saying is you're an LLM?"
},
{
"end_time": 942.824,
"index": 35,
"start_time": 914.258,
"text": " I think it probably extends beyond linguistics. However, linguistics is at least important enough to say, can we model something that happens linguistically in people? And so what happens in dementia, of course, it's typically referred to as actually early on in dementia, you will see short term memory loss. So people won't be able to repeat back a sequence that you say to them in perfect order."
},
{
"end_time": 969.701,
"index": 36,
"start_time": 943.507,
"text": " That's often referred to as a kind of memory loss. I understand it differently. I understand it as that there has to be some representations, activation. In order to guess that next token, you have to have some memory of what you've said before. But it's not meant for retrieval, it's meant for next token generation. So this model"
},
{
"end_time": 999.121,
"index": 37,
"start_time": 971.135,
"text": " makes a completely different interpretation of what dementia is. Instead of thinking dementia as you're missing information, you can't retrieve it anymore, it's that your brain is no longer retaining the activity that's necessary to generate the next token. What I'm doing experimentally is I'm building LLMs that have a shorter context window. So, instead of, you know, when you talk to Chachi BT or Claude, you can feed it a document"
},
{
"end_time": 1026.869,
"index": 38,
"start_time": 999.872,
"text": " everything that it produces during the course of your conversation goes back into its input it turns through on every single cycle every single word it outputs it goes through the whole thing in humans we probably don't do that if i ask you what i said three sentences ago can you anybody repeat anyone my sentences probably not there isn't actually the information isn't necessary in order it isn't sufficient there to actually repeat it however if you note"
},
{
"end_time": 1055.776,
"index": 39,
"start_time": 1026.937,
"text": " You are able to follow what I'm saying. I'm talking about this idea of a context window how big it is Manipulating in the case of LLMs as a model of right you've got all of that So what we've got here is something like an LLM, but it has a very different feature to it It has this kind of K of activity of the activation It's not the entire prompt. It's going in every time. So this model of dementia is just to squeeze that window"
},
{
"end_time": 1082.756,
"index": 40,
"start_time": 1056.459,
"text": " and to say, okay, LM, you're not going to actually have the entire conversation we've had until now. You're going to have some much more, much reduced context that will allow you to do that next token generation. And I'm doing this by just, it's just a simple mathematical manipulation. You just say, here's what you've got to work with. But if you talk to these things, they get confused in just the way a dementia patient would."
},
{
"end_time": 1106.425,
"index": 41,
"start_time": 1084.053,
"text": " And they try to make excuses for it. They're trying to interpret why is it that they'll kind of confabulate during conversations with them. So what that tells me is, oh, I think this might actually be a proper model that's going to actually explain what's going on in dementia such that we can now think about, well, what are the interventions one can do?"
},
{
"end_time": 1133.882,
"index": 42,
"start_time": 1106.834,
"text": " What does this completely new interpretation of what memory is tell us about how to potentially improve memory? So I'm sure you've heard of the diffusion LLMs. Oh, that's so interesting. You should mention that. So at first I was dismayed when I heard about these."
},
{
"end_time": 1162.415,
"index": 43,
"start_time": 1134.718,
"text": " At this point, you can talk about what autoregression is and what the implications are for diffusion elements to your theory. As I mentioned, what elements are doing is they're trained, they're trained models, they're a network. A network takes in an input, it produces an output. The input and output that they are trained to do is, here's a sequence of words, and then all they're trained to do is then an output, the next word. It's called a token, maybe not exactly a word, but we could just say next word."
},
{
"end_time": 1187.773,
"index": 44,
"start_time": 1163.131,
"text": " That's all they're doing. And then they take that word, stick it back onto the sequence, feed it through again. Same model, right? It's just got a new input. And now with the new input with a new word, it then produces yet another output. And it just does this sequentially. And that's autoregression. Autoregression just means that you're doing this sequential input output generation, then the output becomes part of sequence in some way. Okay."
},
{
"end_time": 1216.817,
"index": 45,
"start_time": 1188.2,
"text": " I feel like I'm interrupting a very interesting explanation of autoregression and diffusion. I do want to answer that. We'll come back to it. Thank you, Addie, for interrupting a lot for me."
},
{
"end_time": 1245.009,
"index": 46,
"start_time": 1217.193,
"text": " So I thought this idea that you have of essentially simulating, you're talking about the mind, maybe you're alluding to consciousness, but really what I think you're talking about is personality. Because when we think about dementia and we think about it through the model of a linguistic lens, you don't have access to what is happening in the brain, what is happening in the rest of one's physiology when you're thinking about it just linguistically, right? So you are able to test interventions"
},
{
"end_time": 1267.176,
"index": 47,
"start_time": 1245.23,
"text": " on"
},
{
"end_time": 1277.125,
"index": 48,
"start_time": 1267.534,
"text": " What do you think about the interpretation?"
},
{
"end_time": 1300.811,
"index": 49,
"start_time": 1277.483,
"text": " As you know, on Theories of Everything, we delve into some of the most reality-spiraling concepts from theoretical physics and consciousness to AI and emerging technologies. To stay informed, in an ever-evolving landscape, I see The Economist as a wellspring of insightful analysis and in-depth reporting on the various topics we explore here and beyond."
},
{
"end_time": 1325.435,
"index": 50,
"start_time": 1301.254,
"text": " The Economist's commitment to rigorous journalism means you get a clear picture of the world's most significant developments, whether it's in scientific innovation or the shifting tectonic plates of global politics. The Economist provides comprehensive coverage that goes beyond the headlines. What sets the Economist apart is their ability to make complex issues accessible and engaging, much like we strive to do in this podcast."
},
{
"end_time": 1347.142,
"index": 51,
"start_time": 1325.435,
"text": " If you're passionate about expanding your knowledge and gaining a deeper understanding of the forces that shape our world, then I highly recommend subscribing to The Economist. It's an investment into intellectual growth, one that you won't regret. As a listener of Toe, you get a special 20% off discount. Now you can enjoy The Economist and all it has to offer for less."
},
{
"end_time": 1376.203,
"index": 52,
"start_time": 1347.142,
"text": " I think we only have evidence so far for language as being this kind of autoregressive. My grander theory, which I don't have enough of a leg to stand on yet, is that it's all that. And so personality is also just an autogenerative process."
},
{
"end_time": 1395.657,
"index": 53,
"start_time": 1376.51,
"text": " There are many of them. There are visual ones, there's multi-sensory ones. They do different kind of works. I don't know about olfaction. I don't think we can think and smell. I have something to say about that. Anything you can think in is sort of is autoregressive."
},
{
"end_time": 1409.121,
"index": 54,
"start_time": 1396.084,
"text": " actually really"
},
{
"end_time": 1436.203,
"index": 55,
"start_time": 1409.667,
    "text": " Misha, go ask. I actually have a really interesting thing on thinking and smell. If you look at what language is capable of, and make an assumption, which is probably a bad assumption, that thought is linguistic in nature, then you'd expect that a corpus of text that's able to train an LLM would be able to handle those other senses in the same way that it can, say, describe something that it hasn't seen before visually and explain how to paint it. And it can explain how something sounds."
},
{
"end_time": 1458.217,
"index": 56,
"start_time": 1436.493,
"text": " It indicates that people are very visual and auditory and language was created by people. If it was the dogs,"
},
{
"end_time": 1482.363,
"index": 57,
"start_time": 1458.643,
"text": " I'm not convinced that the LLMs can't think in terms of flavors and things like that. Just this weekend I was showing my brother what you can do with ChatGPT and coincidentally he had sent me a picture. He just went to a new market and he had gotten a whole bunch of different ingredients and he said, oh, I should ask ChatGPT. I said, just take a picture."
},
{
"end_time": 1512.142,
"index": 58,
"start_time": 1483.507,
    "text": " And he literally took a picture of the ingredients and asked what to make. And it gave him a very elaborate recipe. And at the end, he said it was 10 out of 10. He thought it was really fantastic. And it was a random set of ingredients, and it was able to compile them. So I think if we fed them enough cookbooks... And again, are they just kind of vacuuming up the human experience? Does it really ever understand what these cocktails are going to taste like? In the case of vision, you can see, right? You can feed them YouTube videos and they can think,"
},
{
"end_time": 1535.845,
"index": 59,
"start_time": 1512.449,
"text": " Okay, I want to ask Gil a question. So, Gil, you left academia for industry. Now, many people who stay in academia do so because there's research in academia and they see industry as that's, well, that's where you make money, but it's not where you perform research. But it's my understanding that you did both."
},
{
"end_time": 1557.466,
"index": 60,
"start_time": 1536.425,
    "text": " Can you please talk about that? Why is it that you left the academy? What is it that the universities do well, and where do you see them lacking? That's a tough question. I left academia because I felt like I wanted to have a big impact. And again, no complaints."
},
{
"end_time": 1585.026,
"index": 61,
"start_time": 1557.671,
    "text": " Nothing bad to say about anyone in academia. But what I saw at that time, and it was long ago, is that basically you publish a paper that maybe five or 10 or 20 or 50 people are reading, and that's it. The impact is pretty small. I wanted to translate more to everyone, and actually that's what I'm trying to do. My mission is to translate information for everyone."
},
{
"end_time": 1606.783,
"index": 62,
"start_time": 1585.555,
    "text": " And that's why I decided to move to industry. And I think that doing research in industry is not less exciting or less important than in academia. For example, we have now a data set of hundreds of thousands of people with blood, DNA, fitness, stroke and"
},
{
"end_time": 1632.159,
"index": 63,
"start_time": 1607.022,
    "text": " and some food recognition and some biological age and a lot of things that we discussed before. About All of Us, for whoever doesn't know about it: it's something that the US government is trying to do, at least one million people and all of that, but it takes time, it takes money. Let me ask you a quick practical question. What are the markers? Because you test for markers, blood markers, maybe others."
},
{
"end_time": 1661.118,
"index": 64,
"start_time": 1632.466,
    "text": " What are three markers that people here at home should look at as indicators of their health that people are talking about? Yeah. So I can talk about blood markers, but there are other markers that are not blood. For example, VO2 max is a very important marker, and it's not blood. Even your Apple Watch can tell you that, though it's not very accurate. So that's one marker that's important. But if we are talking about blood biomarkers,"
},
{
"end_time": 1677.619,
"index": 65,
"start_time": 1661.118,
    "text": " I would say that glucose, or A1C, is very important because it shows whether you are going in the diabetic direction. Then you have, sorry, you have ApoB, which is a marker more for cardiovascular diseases."
},
{
"end_time": 1707.739,
"index": 66,
"start_time": 1677.875,
    "text": " hs-CRP, which is a marker of inflammation. But there are a lot of markers; I wouldn't say that those are the most important. I think that it depends on what you are trying to treat. My belief is that everyone is a unique person, and you need to define what issues you have, and then the right marker for the problem that you have. Let's define it, and then let's try to see how you can attack it and improve yourself."
},
{
"end_time": 1730.179,
"index": 67,
"start_time": 1709.189,
    "text": " I am Arif Dalvi. I direct the Parkinson's Disease Center just 15 minutes down the road. But I had a question for Elan about the word confabulation. As a neurologist, we see that in a syndrome called Wernicke-Korsakoff syndrome. People who are alcoholics damage an area of the brain called the mammillothalamic tract,"
},
{
"end_time": 1739.974,
"index": 68,
"start_time": 1730.589,
    "text": " and if you show them, you just hold your hands like this and you say, do you see the string? Not only will they see the string that is not there, but they will describe it in great detail."
},
{
"end_time": 1766.681,
"index": 69,
"start_time": 1740.486,
    "text": " So that area, as is true of everything in the brain, we don't know exactly what the mammillothalamic tract does, but it's sort of like an error-checking part of the brain. So in terms of LLMs and confabulation, is that confabulation coming from a lack of error checking? And can that be computationally solved? That is exactly the right kind of question you can now ask: you can ask this question computationally."
},
{
"end_time": 1793.183,
"index": 70,
"start_time": 1767.159,
    "text": " Where is the deficit? Assuming again that language generation is modeled by the system, we can break it in different ways and see if you get something characteristic of exactly what you're referring to. I think I have some ideas how you could induce exactly that, and you would see very much this kind of confabulation. First of all, they're always confabulating. We have to get it into our minds that they don't"
},
{
"end_time": 1822.073,
"index": 71,
"start_time": 1793.541,
    "text": " They're not reading the prompt. They're sort of guessing what is the appropriate response given this prompt. In some sense, what's happening in this case is it's over-guessing. It's going sort of beyond the data in a certain sense, and the question is how do you model that in this kind of system. But yes, it's exactly similar to what Dr. Levin was talking about. We need to think about these things, I would say, computationally, which is not a"
},
{
"end_time": 1831.63,
"index": 72,
"start_time": 1822.619,
"text": " His word is better, I think, psychologically, but I guess in some sense I think that when we were dealing with this particular kind of modeling,"
},
{
"end_time": 1854.633,
"index": 73,
"start_time": 1831.954,
"text": " We have"
},
{
"end_time": 1883.029,
"index": 74,
"start_time": 1855.077,
    "text": " I have a theory of dreams now. The theory is that it's a short context window. You don't remember your activation system because you're actually generating. It's not the world; it's your mind creating the visual imagery, and it's premised on a much shorter window than you usually have. Everything is internally consistent. In the early days of deepfakes, and deepfake videos in particular, it was really fun to watch,"
},
{
"end_time": 1904.462,
"index": 75,
"start_time": 1883.285,
"text": " Because they look very dreamlike. Because you'd have somebody riding on a motorcycle and suddenly they'd fly into the air on a rocket ship. And if you look at any three or four frames, it makes perfect sense. Right? And that's what dreams are. So this kind of, you know, the confabulation you're referring to could be thought of as representing exactly that. And now we can get into the guts of the system and say, well, let's modify that."
},
{
"end_time": 1933.763,
"index": 76,
"start_time": 1904.718,
    "text": " So is it a scale function? Like, before LLMs, we had auto-predict and auto-correct on our iPhones. But now with LLMs, the scale has become so huge. Is that a function of just the scale? It is. So it's two things. It's the transformer model, which parallelizes: instead of having to churn through the entire sequence, you just do it, boom, all at once. So parallelization, and then GPU scale. So yes, that basic recipe plus scaling, until the scaling laws break,"
},
{
"end_time": 1960.247,
"index": 77,
"start_time": 1934.138,
    "text": " seems to be like the solution. Now, anybody in this audience can ask the question, I'm ready for it, which is like, come on, this is not a model of the brain. We don't need all that data. That's not how we learn. We don't cram. We don't churn through, you know, billions of pieces of text by the time we learn to speak. So I think that right now, industry-wide, there's just a race to beat the benchmarks,"
},
{
"end_time": 1990.265,
"index": 78,
"start_time": 1960.555,
    "text": " and what they know works is scaling, and it does work, the scaling is fantastic. But just because that solution is working right now doesn't mean you need all that scale, and there is probably a tremendous amount of waste, both in terms of how much compute you need and also in terms of the curriculum, as I call it. They train these things by just churning through all this data. If we taught them, you know, children's stories first, and then gravitated towards nuclear physics later on in their education, maybe they would get there sooner, or something like that. So, yeah."
},
{
"end_time": 2020.145,
"index": 79,
"start_time": 1991.22,
    "text": " Something that would be kind of interesting to see, right, is how many of you have tried to use AI, right? There are different ones out there, but which of you, slight pivot, have tried to diagnose some specific medical thing with you or someone you care about? Okay, we've all done this quite recently, I'm sure. Now, how many of you verified whether what they said was actually correct? And it was correct? Okay. Most of the time it wasn't correct. Yeah."
},
{
"end_time": 2041.852,
"index": 80,
"start_time": 2020.486,
"text": " I think that if we are talking about the LLM and the training set and all of that, I think that what is missing in the health, wellness, biology is the training set, because the internet is great to know the literature and the language and all of that, but the data of health, wellness, performance,"
},
{
"end_time": 2053.643,
"index": 81,
"start_time": 2042.005,
    "text": " is not there, or what is there is a lot of the time skewed by gurus that wrote a blog or something like that, so it's not really medical."
},
{
"end_time": 2076.647,
"index": 82,
"start_time": 2054.258,
    "text": " I think that that's the next frontier: how do you build, and I call it a model on top of the foundation model. So you need to build your own model on top of the foundation model, and then train it, and then use the LLM in a medical setting. Because you cannot make a mistake, or you can, but mistakes should be very rare, because it's life and death."
},
{
"end_time": 2102.978,
"index": 83,
"start_time": 2077.261,
    "text": " And one last thing before we get to the questions. Something else that I think about, since in a lot of ways you're kind of in this middle ground between direct medical practice, what people would do, and the research side, as you know from InsideTracker, right? One of the things that I think about a lot is the rise of the so-called Google doctor, which we've all definitely done at some point or another in our lives and may even still do. Maybe now it's no longer the Google doctor, it's the ChatGPT doctor."
},
{
"end_time": 2119.565,
"index": 84,
"start_time": 2102.978,
    "text": " The biohacking and longevity fields were almost inspired by what the internet did."
},
{
"end_time": 2146.988,
"index": 85,
"start_time": 2119.838,
    "text": " Those approaches would be fundamentally impossible without the internet. Exactly. So now we want to think about what's impossible now that will be possible next year, or the year after, because of these new technologies. How are we going to change the way we think about health and longevity? As the microphone is traveling, can you talk about your swarm of medical AI views? Yeah, so the thing I've been working on recently that I'm very excited about"
},
{
"end_time": 2175.128,
"index": 86,
"start_time": 2147.346,
    "text": " is getting an entire collection of these AI agents. And so imagine when you open up a tab for ChatGPT and you have a conversation: you're accessing this model that is incredibly capable, as I'm sure you're aware of by now. But this is like going to the deli and asking for one slice of baloney. Because while you're answering that question in the tab, everybody else on the planet is basically doing the same thing."
},
{
"end_time": 2201.015,
"index": 87,
"start_time": 2175.828,
    "text": " And so as impressive as these agents are, we need to remember you're kind of sharing. You're only getting one slice of the baloney. What is this going to look like when you have the whole ChatGPT to yourself? It's only a matter of time economically before it gets to the point where you could afford to have the whole thing. And then it'll get to the point where you'll have two of them, and you'll have three of them, and then you'll have a thousand of them."
},
{
"end_time": 2225.333,
"index": 88,
"start_time": 2201.869,
    "text": " If you have a thousand agents all working for you, how do you prompt them? Do you set it up like a company? Do you tell the CEO of your swarm, here's what I want you to do, and they'll have a management team? Because imagine, if we look at computers: the cost of memory, having thousands of bytes was a big deal, then millions of bytes was a big deal, now trillions is not a big deal. When they talk about tokens, you're going to be talking about mega tokens."
},
{
"end_time": 2246.886,
"index": 89,
"start_time": 2225.794,
"text": " How many millions of tokens a second can the models output? And then you're going to have a million agents that can each output a million tokens per second. How do we anticipate and plan for that reality, which I think we're going to see relatively soon? And in the area of medicine, if you imagine if there's like kind of a VIP in the government,"
},
{
"end_time": 2271.237,
"index": 90,
"start_time": 2247.278,
    "text": " And something happens to them. Literally hundreds of doctors are going to get put on call to help with that situation, right? The entire hospital floor will be... We've seen it with Trump a few years ago. Exactly. That's what I mean, right? The entire floor, the entire hospital is like, we're helping this one person. None of us can afford that. We can't afford it right now. But if we look at this technology,"
},
{
"end_time": 2296.323,
"index": 91,
"start_time": 2271.715,
    "text": " it'll be very reasonable to think that you have the brain power of a thousand physicians, and one's a radiologist, and one's in internal medicine, and one is talking about your psychology, and one is looking at what you ate for breakfast, and so on. And they'll all have this dialogue. And imagine kind of like a medical conference; we have the Society for Neuroscience, I think 30,000 people show up to that annual meeting. Imagine having that kind of horsepower for you."
},
{
"end_time": 2306.527,
"index": 92,
"start_time": 2296.698,
"text": " For one little thing, maybe not even a life-threatening situation, you say, I just want to feel better."
},
{
"end_time": 2332.705,
"index": 93,
"start_time": 2310.111,
"text": " One comment on that."
},
{
"end_time": 2360.418,
"index": 94,
"start_time": 2333.097,
    "text": " I think that, in my opinion, that's the real deal: the agent and the swarm of agents. And I'll tell you why. Getting to a diagnosis is not the problem. Everyone knows that we shouldn't eat packaged food, and we should exercise, and we should sleep at least seven to nine hours, and we should do a lot of things that we know. But 95 percent of us are not doing it, and one of the reasons is that it's hard."
},
{
"end_time": 2364.548,
"index": 95,
"start_time": 2360.845,
"text": " And if you have those agents and they for example."
},
{
"end_time": 2392.432,
"index": 96,
"start_time": 2364.753,
"text": " 100%. Yeah."
},
{
"end_time": 2408.695,
"index": 97,
"start_time": 2392.773,
    "text": " So AI is very good at taking a whole lot of data, putting it together, and coming out with an output. But AI can't do anything with the subtleties of humanity. We have a Parkinson's neurologist here. So can AI pick up a masked face,"
},
{
"end_time": 2438.114,
"index": 98,
"start_time": 2409.036,
"text": " a change in voice. AI can't do physical examinations. So AI could just spit back data that was put into it. So that's a great point. And it comes to this sort of different eras of AI. And so now we're in this era where we have these language models and we can talk to them in English. And I would argue, we could debate this, that they're approaching artificial general intelligence if they're not there already. But what we've got to remember is while that model can't examine a face or listen to"
},
{
"end_time": 2455.538,
"index": 99,
"start_time": 2438.541,
"text": " to the voice changes. There are other, what used to be called narrow AI models that have been developed for the last 10, 15 years, some longer, that can do that. Now, they're very cumbersome pieces of code, they're not easy to work with, and you cannot talk to them in English."
},
{
"end_time": 2478.456,
"index": 100,
"start_time": 2456.152,
    "text": " But in the next couple of years, those two kinds of fields are going to merge, the traditional deep-learning-type AI and the language models. And then I think they're absolutely going to be able to look for facial droops and voice changes and shuffling gait analysis and things like that. And how about understanding the subtleties of emotion? Many times a patient will come into your office,"
},
{
"end_time": 2495.247,
"index": 101,
"start_time": 2478.456,
    "text": " and basically you have to pull information out of them, and you've got to understand emotionally how that patient is in order to ask the proper questions. There may be proper questions it could ask based on a sequence, but it cannot pick up emotion."
},
{
"end_time": 2516.988,
"index": 102,
"start_time": 2495.247,
    "text": " I think it's debatable whether or not they can analyze emotion. I would argue that they can. There's a new one that just came out; I'm sure some of you have seen it. We wanted to run it later. It's Sesame, and it's incredibly good."
},
{
"end_time": 2543.933,
"index": 103,
"start_time": 2517.346,
    "text": " As long as we can get the data for it, which we have. In response to the doctor here,"
},
{
"end_time": 2573.729,
"index": 104,
"start_time": 2544.514,
    "text": " I don't think AI is going to replace doctors right away, even though in areas like radiology and others it's getting better than the average doctor, because what's going to happen is that the doctor is mostly going to spend a lot more time with patients one on one, interpreting the AI outputs for the patients and providing that personalized human connection. And so,"
},
{
"end_time": 2600.111,
"index": 105,
"start_time": 2574.258,
    "text": " yes, some doctors probably will be replaced outright, but I think there will be a role for some time. Right now we are building some benchmarks, and one benchmark shows that the latest AI actually has better EQ than 62% of physicians, and it will probably get to about 90%. There is a cultural bias"
},
{
"end_time": 2621.425,
"index": 106,
"start_time": 2600.401,
    "text": " so in Japan, where they trust machines a little bit more than they do in America, it's actually somewhat higher than it is here. But it is actually very competitive now with physicians. I'd like to hear more about the diffusion model. Autoregressive is very old-timey at this point; it's like months old now."
},
{
"end_time": 2648.234,
"index": 107,
"start_time": 2621.954,
    "text": " So, if we can talk about diffusion, I think that's going to unleash a lot of capability. So, the reason why I was excited about that: diffusion can do this in parallel. The thing about autoregression is it's slow. It's inherently slow, because what you have to do is produce the output, then rerun it. It's inherently sequential, and that's bad in terms of runtime."
},
{
"end_time": 2678.575,
"index": 108,
"start_time": 2649.019,
    "text": " Diffusion models can do this sort of thing in parallel. Diffusion models just try to figure out the entire sequence all at once. Let me just back up real quick. I think language is inherently autoregressive, like language itself. The only way to solve it is to do it autoregressively. My prediction is that you're not going to be able to do diffusion. People may have read that diffusion models are doing a pretty good job."
},
{
"end_time": 2703.677,
"index": 109,
"start_time": 2679.377,
"text": " But it's on code. It's on code. And code is a human artifact that's not built autoregressively. It's not how it functions. The syntax is not actually built for that. That's not how it's generated. In the case of language, I don't think we're ever going to get... So my prediction is you're not going to be able to do it in parallel. It has to be done. The language itself contains within it this autoregressive predictability. That's how you have to do it."
},
{
"end_time": 2726.766,
"index": 110,
"start_time": 2704.855,
"text": " We also have different modalities with which we experience the world, this sort of classic left versus right brain paradigm and one part of us sees the world in this sequential"
},
{
"end_time": 2748.183,
"index": 111,
"start_time": 2726.766,
    "text": " serial kind of fashion, and the other part of our mind sees things all at once, like recognizing a face. We don't really decompose that; it is sort of just there. And I suspect that maybe these different mechanisms that we're discovering in technology, autoregression and diffusion, maybe these are both valid models. We just need to be looking in different aspects of the brain to find them."
},
{
"end_time": 2769.616,
"index": 112,
"start_time": 2748.183,
    "text": " You know, three things. First is, healthcare has unstructured data and unlabeled data. In terms of having your model actually turn from detection to diagnosis, there's a thin line of difference, right? So when you say diagnosis, you're talking about CPT codes, you're talking about FDA approvals. So my question is very, very simple. Number one is,"
},
{
"end_time": 2798.268,
"index": 113,
"start_time": 2769.616,
    "text": " how can AI, LLMs, and all the models structure and label data for the next generation of researchers? My second question to you is, what's the workflow process in terms of taking AI to actual clinics and healthcare: making a workflow, getting FDA approval, putting it into a device trial, having the FDA come and see what you did, and getting it through? That's how you can put it into healthcare administration. My third question is,"
},
{
"end_time": 2826.135,
"index": 114,
"start_time": 2798.268,
    "text": " what challenges do you see? Because convincing doctors, as he said, is very, very correct, right? The thin line of difference between diagnosis and detection: we can claim that it can detect, but claiming diagnosis is a very big thing. What is the insurance process? That's a different industry we're talking about, right? Yeah, I can talk about that, because I actually worked at Mass General Brigham for several years, and I was working on deploying AI into the radiology clinic."
},
{
"end_time": 2855.128,
"index": 115,
"start_time": 2826.698,
    "text": " I can talk about the FDA briefly. They've approved over a thousand AI tools. Most of them are not commercially successful, because hospitals don't have any money to spend on this sort of thing, and they're basically beholden to the insurance companies to pay all the bills. You mentioned the CPT codes. There are actually, I think, two CPT codes for AI,"
},
{
"end_time": 2876.886,
"index": 116,
"start_time": 2855.384,
    "text": " but generally insurance doesn't cover any kind of use of AI. So it's actually very challenging for hospitals to find money for AI. And that's one of the reasons the percolation of development and deployment of the technology is going to be a lot slower. You'll probably have AI doctors at home"
},
{
"end_time": 2902.534,
"index": 117,
"start_time": 2877.261,
    "text": " that you're using before it's actually in the hospital. But the other thing is the FDA. Another reason it's going to be slow getting into the hospital is that the FDA has not released any guidance on more general AI. I actually think that people are going to be using these general AI doctors at home,"
},
{
"end_time": 2928.319,
"index": 118,
"start_time": 2903.319,
    "text": " unfortunately, long before they're actually used in the clinic. One comment I had about what the doctor just said: you know, as a physician I am worried about losing my job, and I thought that, you know, it is going to be the pixel-based people who will lose their jobs, radiologists, pathologists. As a neurologist, with so much touchy-feely diagnosis going on, I wouldn't be at risk."
},
{
"end_time": 2945.282,
"index": 119,
"start_time": 2928.507,
"text": " And then I consulted with a startup somewhere in Illinois. They were sending me videos of patients with essential tremor and Parkinson's and I was diagnosing based on the video and they were actually looking at facial recognition and voice of the patient."
},
{
"end_time": 2960.52,
"index": 120,
"start_time": 2945.282,
"text": " and"
},
{
"end_time": 2981.852,
"index": 121,
"start_time": 2960.52,
    "text": " But there's a lot of untapped medical data. For example, I go to the OR, even as a neurologist, to map the brain for brain stimulation. Just a second, one comment on what you said about replacing a physician: think about the self-driving car. We have been talking about it for the last twenty, maybe fifty years, and it's still not here."
},
{
"end_time": 3005.077,
"index": 122,
"start_time": 2982.176,
    "text": " And I think that what a clinician is doing, and I have high respect for clinicians, is a bit more than driving, so I don't think we will see it in our generation. That's one. The second is, as some other people said, the human touch is very important, and people want the human touch. I built InsideTracker; it was built completely automated and scalable."
},
{
"end_time": 3028.985,
"index": 123,
"start_time": 3005.384,
"text": " I just wanted to make one comment if I may as a physician actually to my physician colleague here. I think"
},
{
"end_time": 3058.746,
"index": 124,
"start_time": 3029.548,
    "text": " the talk about physical examination being something AI can't do, that's very much part of the current, or even the historic, paradigm of medicine. Because it's really a proxy for what's going on inside, which imaging and other diagnostic techniques show us. So while physical examination remains important, I think the new paradigm of medicine will actually be based on large data sets, etc., combined with imaging. And I think"
},
{
"end_time": 3082.858,
"index": 125,
"start_time": 3059.377,
    "text": " in my mind, it's probably wrong to focus on the lack of physical examination, thinking, you know, AI won't be able to take over what doctors do, because I think it will just become less important as other, more accurate methodologies take over. I was going to say that the current economy of medicine is dependent on this sort of diagnostic code"
},
{
"end_time": 3099.991,
"index": 126,
"start_time": 3083.387,
"text": " infrastructure."
},
{
"end_time": 3129.394,
"index": 127,
"start_time": 3100.589,
    "text": " But they can think more subtly, and they can make predictions that would say: with this genetic profile, and these cardiovascular measures, and your current lifestyle, and all of that stuff that frankly human doctors are not equipped to deal with as a data stream. They can't really think those things. It will be able to, and it will be able to say, you know what, you should reduce sugar a little bit and you should exercise 15 minutes more a week."
},
{
"end_time": 3151.988,
"index": 128,
"start_time": 3130.265,
    "text": " Why? And maybe the insurance company wants to know why, right? And it'll say, because the data says that. But it's more than that. I think that today, if you look at what clinicians are doing, and you have some clinicians here, so correct me if I'm wrong, basically you see that with the EMR and entering a lot of information, the 15 minutes are gone and that's it. The next patient is coming."
},
{
"end_time": 3177.449,
"index": 129,
"start_time": 3152.398,
    "text": " I think that what will happen now, or in the future, is you have all the information in your computer or tablet or whatever, and you have time to understand the patient and also to provide the intervention that will work for him. We used to say that people who are fat are basically lazy; now we show they are sick, and they now have GLP-1 for that."
},
{
"end_time": 3201.254,
"index": 130,
"start_time": 3177.807,
    "text": " So I think that we will start to divide the population into more and more buckets, and then maybe AI will help us to know what bucket this patient is in, and then the job of the clinician will be to communicate it and to help him implement the intervention. In terms of chronic disease prevention, eighty percent of chronic diseases can be prevented, but how well can AI"
},
{
"end_time": 3224.77,
"index": 131,
"start_time": 3201.681,
    "text": " predict and model, I guess, the dynamic interactions happening with multifactorial chronic disease? Like, for example, someone who has diabetes who may develop cardiovascular disease or chronic kidney disease: how well can AI differentiate between the biomarkers from one particular, you know, disease versus"
},
{
"end_time": 3249.65,
"index": 132,
"start_time": 3225.077,
    "text": " the other, looking at it all together instead of just one thing at once, with the multi-agent? Yeah. Let's start with what is happening right now. Our health care system is basically sick care. The clinician is the one who treats you, or won't look at you, or kicks you out of the office if you're not sick. Okay? And we need to move to a prevention system, basically starting to look at"
},
{
"end_time": 3275.657,
"index": 133,
"start_time": 3250.094,
    "text": " I went to my clinician and told her that my ApoB is high. She told me, what is ApoB? And I'm in Boston, so it's not like, I don't know, Alabama. So we have a problem: clinicians were trained, or learned, 30 or 40 years ago, and they know what they know. They are very busy. They are poor people. I really don't want to replace any clinician."
},
{
"end_time": 3297.978,
"index": 134,
"start_time": 3275.657,
"text": " Working hard and they're doing their best, but they don't have time to go to PubMed and read a new paper. So I think that what the AI can do is to send them a summary or even look at you when you are coming to the practice and immediately send the AI is great in summary data. Look at all the data available of all the stories that you have."
},
{
"end_time": 3320.111,
"index": 135,
"start_time": 3297.978,
"text": " And all the medical data that they're available and give him one sentence one a power graph that he can read to you and basically explain to you where you are and then provide you some intervention that you can do. Industry and employment for the younger generation is going to be totally skewed and flipped as time progresses so what can we the younger generation do to prepare for all this."
},
{
"end_time": 3334.77,
"index": 136,
"start_time": 3320.862,
"text": " Yeah, fantastic question. Historically, the answer was to specialize and to find a particular niche and go into that, whether it's in medicine or in general, to just be a world expert in a very, very narrow thing."
},
{
"end_time": 3351.715,
"index": 137,
"start_time": 3335.316,
"text": " That I don't think is going to be competitive anymore compared to having a broad landscape view of what's actually happening. So I would imagine I would I would encourage you to run into these tools as fast as you can try them the thing we're talking about earlier with vibe coding the ability to create software."
},
{
"end_time": 3370.06,
"index": 138,
"start_time": 3351.715,
"text": " Is it gonna explode right now we don't realize it but software is extraordinarily expensive. And there's lots of apps and tools and things that we would love to have as an individual companies in your practice. And you have to rely on the software industry to create them because a single phone app you'll cost half a million box to get out the door."
},
{
"end_time": 3392.005,
"index": 139,
"start_time": 3370.06,
"text": " That's gonna completely change so i don't think we're gonna need less software developers i think we're gonna need more software developers than ever before and it's gonna change i have a stack of punch cards. In my office cuz they were here at fau we had an early IBM computer here. At the time at one point that's what it meant to be a computer programmer you would literally down in the ones and zeros and now we can think in english."
},
{
"end_time": 3417.927,
"index": 140,
"start_time": 3392.005,
"text": " The dream of computer science from the nineteen fifties that we just talk in plain language and we get software out of it and i think this in some sense is the killer app of l l m's is that they can write code they can create working software and historically that took teams of really trained talented people to do and now we're getting this kind of literacy. This is real quick if we go back to the middle ages there was no word there was no concept of being illiterate."
},
{
"end_time": 3438.49,
"index": 141,
"start_time": 3418.558,
"text": " There was no expectation that most people would be illiterate. The idea was that you had this professional class of scribe and that they took care of that for you. Nowadays, we don't have a word for ill health or it because there's a specialized class of physicians and it's your job to tell me what it is. And I don't bother with that. I don't think we're going to be as comfortable outsourcing that."
},
{
"end_time": 3467.978,
"index": 142,
"start_time": 3438.626,
"text": " We're going to take responsibility for that medicine, and we're going to need the AI tools to do that. So we're going to be able to create these custom apps, and we're going to be able to create this customized, personalized medicine for everybody. Yeah, a couple of comments about that. I think that's a very good question. So as the founder of a company, I can tell you that you have a lot of prioritization of what the team will do, and you're maybe doing maybe 1%, maybe even less than what you want to do."
},
{
"end_time": 3495.026,
"index": 143,
"start_time": 3468.456,
"text": " So I completely agree with Will. We'll need more software developers. What the software developer will be, they will be much more efficient and will do more. And for example, if you are testing a model and we're trying to find what is the best model for a specific question, instead of looking at two different ways, we'll take a look at 20 different ways and we'll do it in half time. Now, another analogy is to"
},
{
"end_time": 3522.773,
"index": 144,
"start_time": 3495.128,
"text": " Compare it to the industrial revolution that happened, I don't know, 100 years ago. So then a lot of farmers basically lost their job because one combine can cover, I know, 1,000 people. But if you think about the code, the code is not limited. The land in the house is limited. The code is not limited. So basically, in my opinion, we'll need more coders and basically everyone here, even the biologists and I know everyone can be a coder right now."
},
{
"end_time": 3550.708,
"index": 145,
"start_time": 3523.114,
"text": " I have two quick comments. Sorry, I have to interject, but just"
},
{
"end_time": 3571.101,
"index": 146,
"start_time": 3551.493,
"text": " About the software engineering thing, I would not do a degree in computer science just because computer science is mostly going to teach you about theory of algorithms and all this stuff that's not very useful in my opinion. Unless you're really interested in that and you want to try to be a professor, I wouldn't do that."
},
{
"end_time": 3599.087,
"index": 147,
"start_time": 3571.323,
"text": " And I think a lot of software engineering jobs will be replaced. So I would definitely consider a different career. But I wanted to mention this thing about EQ. People are mentioning AI is way better emotional intelligence. I think the study people were referring to was done specifically with looking at responding to messages in a patient portal."
},
{
"end_time": 3621.135,
"index": 148,
"start_time": 3599.565,
"text": " And, you know, because the doctors are really overworked, they tend to give very terse responses to those to those messages, or as the AI can give, you know, gives more nuanced sort of empathetic responses. So in that context, yes, the AI is, is much more pleasant to talk to more aware of emotions, but"
},
{
"end_time": 3643.575,
"index": 149,
"start_time": 3621.51,
"text": " I don't think"
},
{
"end_time": 3666.135,
"index": 150,
"start_time": 3644.497,
"text": " Applying more health care resources. People seem to consume health care resources at a much higher rate than you would expect. There's diminishing returns. They showed people are doubling the amount of health care you utilize doesn't improve your outcomes."
},
{
"end_time": 3692.312,
"index": 151,
"start_time": 3666.596,
"text": " So there's this puzzle like, why are people consuming so much health care? And he argues it's because people have this emotional need to feel like they're being cared for. So really what they're doing is they're fulfilling that emotional need. I don't know if AI is, maybe people will get that from AI, but. I want to ask a question. Recently, health plans and large entities such as pharmaceutical companies, et cetera,"
},
{
"end_time": 3715.23,
"index": 152,
"start_time": 3692.739,
"text": " And also very large shop practices which are owned by hospitals et cetera are putting in their contracts provisions that prevent the distribution of patient data. In other words who owns the data so this evening is built around data the issue is not that so much cuz date is getting consumed."
},
{
"end_time": 3735.93,
"index": 153,
"start_time": 3715.623,
"text": " It's who owns that data and is it going to be released? Is it primary data? Is it secondary data? Is it peer-reviewed data? Under HIPAA, patients do have a right to get their data, but what I observed when I was at Mass General Brigham and other hospitals is"
},
{
"end_time": 3760.947,
"index": 154,
"start_time": 3736.596,
"text": " It can be very challenging, especially with the radiology images. If anyone has tried to get one of the radiology images, it can take a long time and they have to give you like a CD-ROM or a DVD. It's very challenging. And I actually think the hospitals are making it even harder because they're realizing the value of their data. And they're also more reluctant to share data with researchers, I think because they're realizing that"
},
{
"end_time": 3785.862,
"index": 155,
"start_time": 3761.288,
"text": " You know, the data has enormous value. I think that big, big companies like Microsoft and Google are working with hospitals to have agreements where they can basically bring in an LM or foundation model and train it on all of the data. So I think that's how it will be done. But"
},
{
"end_time": 3796.766,
"index": 156,
"start_time": 3786.852,
"text": " If a patient wants to get all their data and upload it to some sort of cloud service like ChatTPT, unfortunately, I think it's way harder than it should be."
},
{
"end_time": 3824.684,
"index": 157,
"start_time": 3797.142,
"text": " Just real quick, while I think it's amazing, you know, going back and looking at these data sets, it's essentially kind of saying you've got all these great leftovers in the fridge. We collected this data for some other purpose and let's go harvest and mine it. And that's fantastic. But I think now that we understand the value of these tools, we're going to be collecting data at a scale that we've never seen before. Whereas before we took a few data points. Now, one of the things I've been thinking about is this sort of the Star Trek replicator, or not the replicator, the tricorder."
},
{
"end_time": 3845.299,
"index": 158,
"start_time": 3824.684,
"text": " You swing this smartphone-style device in front of someone and it's capturing all kinds of data. I think we need to be thinking about how do we harvest the value of the data we've already spent the money to collect, but maybe more importantly, how do we revolutionize the healthcare system so that it's actually capturing the data, structuring it, and putting it in a way that will help... I think we've got an important data collection question right over here."
},
{
"end_time": 3867.176,
"index": 159,
"start_time": 3846.323,
"text": " Hi everyone, hope you're enjoying today's episode. If you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with my personal reflections, you'll find it all on my sub stack. Subscribers get first access to new episodes, new posts as well, behind the scenes insights, and the chance to be a part of a thriving community of like-minded pilgrimers."
},
{
"end_time": 3896.357,
"index": 160,
"start_time": 3867.176,
"text": " Whenever someone, for example, gets sick and he mentioned who owns the data, there's already massive amounts of"
},
{
"end_time": 3908.353,
"index": 161,
"start_time": 3896.852,
"text": " For example, CAT scans available for people who may have passed from certain illnesses or survived and their remission as they progress through the disease."
},
{
"end_time": 3938.063,
"index": 162,
"start_time": 3908.66,
"text": " So what is that data doing right now? Is it just sitting dormant somewhere and it'd be actually utilized? Well, like with the medical imaging data, the hospital does own it in the sense that they can use it for commercial purposes and they can use it for research. They can use it for a lot of things. Like I said before, there are big companies like Google, Microsoft that are working out contracts with hospitals to essentially train GPT-4"
},
{
"end_time": 3966.305,
"index": 163,
"start_time": 3938.695,
"text": " on all of the radiology images and all of the text reports. And I think if you train something like GPT-4 on all of the images in a large healthcare system and all of the reports, I actually think you probably have something at the level of a radiologist. I saw this South Korean company named Vuno who's very adept in like AI and radiology. So I wanted to know what are roadblocks here in the US that's stopping us from doing the same thing."
},
{
"end_time": 3992.176,
"index": 164,
"start_time": 3966.852,
"text": " The FDA is one of the strictest regulatory bodies in the world. It's almost more exciting to think about some of the emerging countries where things can be deployed more readily and there's actually more need. I'm not surprised that you're seeing AI in other countries just because of the FDA."
},
{
"end_time": 4017.722,
"index": 165,
"start_time": 3992.927,
"text": " to the physician's point of social determinants of health. It actually contributes 30% to a patient's health outcomes, 30%. And a person can't change the zip code overnight where he stays. And it says that where the patient lives, zip codes determine how long the patient will live to some extent. It is unfortunate, but it is the truth. From that perspective,"
},
{
"end_time": 4041.527,
"index": 166,
"start_time": 4017.722,
"text": " The stress of living in that particular zip code affects the person at a molecular level and a cellular level. So what are we going to do about those things? Move to the right zip code. At least, you know, has anyone here tried to get a therapist? It's very hard. My point at the beginning of this was that doctors"
},
{
"end_time": 4067.432,
"index": 167,
"start_time": 4042.312,
"text": " Um, they're not trained. They're only trained in, uh, anatomy, physiology and the biological component. They're not very good at handling, uh, the, how to change behavior and psychological stuff, but it seems AI can, um, can provide a sort of counseling. I mean, behavior, we do know how to change."
},
{
"end_time": 4095.828,
"index": 168,
"start_time": 4068.558,
"text": " change behavior. That's one of the things in cognitive behavioral therapy, right? It just takes a lot of time talking. And like most people I know that are sick, like they do want to get better. Like people with chronic fatigue, like I was talking about the two to 4% of people with chronic fatigue, they desperately want to get better. It's just very, if they go to it, when they go to the traditional healthcare system, the doctors have no idea how to treat it because they have such a complex condition."
},
{
"end_time": 4123.933,
"index": 169,
"start_time": 4096.681,
"text": " Okay, let's hear from Dan van Zandt. Yeah. So I think we have sort of two perspectives on AI. We have the skeptical perspective of like, oh, well, AI won't solve this problem or it won't solve that problem. We have the hype perspective of this is going to change everything. But I think practically my view is it's not going to solve all the problems in medicine and society, but it's going to make incremental progress on some of those. I'm curious where each of the speakers in the panel see as the best target in the immediate future. I'm talking the next three months."
},
{
"end_time": 4151.937,
"index": 170,
"start_time": 4124.497,
"text": " Maybe if you know, a company was sufficiently motivated where AI could make incremental progress on a specific problem in medicine, not solving all of the issues with society and zip codes and everything else, but it's very specific thing that you see AI as being really helpful towards. Okay, Elon, I kind of want to split my answers. But I think I think this sort of AI first, maybe, I don't know if this is gonna happen in the next three months, but we're"
},
{
"end_time": 4179.462,
"index": 171,
"start_time": 4152.21,
"text": " People are able to intelligently sort of pre-diagnose and come in, decide, make their preliminary medical decisions like, should I go to the doctor or not, in a more informed and intelligent way, potentially. I don't see which industry and which company is harnessing that exactly. So it's hard to see the profit motive, which means it doesn't ever happen, right? I guess OpenAI themselves can see this as a utility."
},
{
"end_time": 4204.36,
"index": 172,
"start_time": 4179.889,
"text": " and then they can sell that as a utility. But I do think that there is a very strong possibility in the very near future, I'm hopeful, that we're going to just see straight up AI breakthroughs in terms of actual hypotheses. You might know a little bit about this. The deeper search and these other engines, I haven't gotten access to it yet, but Coscientist, which is a Google product,"
},
{
"end_time": 4216.049,
"index": 173,
"start_time": 4204.787,
"text": " is"
},
{
"end_time": 4245.725,
"index": 174,
"start_time": 4216.596,
"text": " We talked about the data and data ownership. It's very possible that companies that are actually sitting on silos of data, they don't want to share them. They need the profit motive, but they can then use these tools maybe to actually develop novel drug interventions and things like that. So I think that soon we'll probably will see some actual effective practice of science using these. OK, now the three months, the next three months in AI, what's a specific problem that can be solved? Don't paint the whole"
},
{
"end_time": 4271.425,
"index": 175,
"start_time": 4246.237,
"text": " Picture of the world changing in 35 years. What's the next three months looking like? Well, I think it's just the ability for everybody to get their own physician, essentially, this kind of concierge medicine that only the very wealthy can afford. To some sense, we look at how much health care costs, you know, at $6 trillion price tags as a culture, we can't afford it. We have to change something about that. And so I think the opportunity is"
},
{
"end_time": 4285.947,
"index": 176,
"start_time": 4272.022,
"text": " to"
},
{
"end_time": 4315.35,
"index": 177,
"start_time": 4286.34,
"text": " Maybe not even the current version, but the promise of these systems is not just that they will know internal medicine and cardiology, but they will be therapists. They will become your friend. They will know how to talk to you. They'll be your dietician. They'll be your gym coach and this whole thing. And so they're going to be talking to you about your blood pressure, but talking about it in the context of, hey, you know, you could save some money this week on your grocery bill if we make these recipes. It will be so kind of holistic because it's polymath. It's not just a physician. It's all of these things at once. And I think that's"
},
{
"end_time": 4343.166,
"index": 178,
"start_time": 4315.811,
"text": " That's something you can do today. You can go into the prompt and say, can you please simulate being the following 10 things and talk to me what I should do today? Yeah, I will start with a joke and then I will. So a joke about AI. So a CEO of a company wrote a few bullet points that explain the vision of the company. And then one of the executives said, no, no, that's too short. Let's please extend it. So using AI and wrote 10 pages."
},
{
"end_time": 4369.241,
"index": 179,
"start_time": 4343.558,
"text": " And then one of the employees received the 10 pages and they asked the AI okay summarize it to five bullet points and you got the same. So I think that that's the power of AI. Basically take a big data and summarize it and take it to the point. So I think that we can and maybe in the next three months and it's available today to take all the PubMed and all the information and come to a patient with five bullet points. That's what you need to do today."
},
{
"end_time": 4398.968,
"index": 180,
"start_time": 4369.65,
"text": " Yeah and just a similar thing is kind of what I was talking about before is the AI could interview the patient before they see the doctor and then summarize some of the most important points instead of just doing an intake form that you know is kind of very grand is very like the data is very compressed into like a five point scale like how's your sleep zero to five"
},
{
"end_time": 4427.688,
"index": 181,
"start_time": 4399.735,
"text": " The AI can spend more time exploring like that and then capture the salient points and review all the medical history as well. So yeah, I think chart summarization is a very low-hanging fruit application we're already seeing. I didn't want to say something. So this is a medicine and AI event. We have lots of different types of people here. They all have incredible amounts of skills and set of errors. But I think something important to sum up for today is we want to talk to all the physicians in the audience, people like Dr. Dalvi."
},
{
"end_time": 4454.343,
"index": 182,
"start_time": 4428.063,
"text": " who spent years of their life, arguably the last years of their youth, giving up a part of themselves so that people like us could live healthier today and even have this position. Some of us may even reach centenarian status in this room, or if we hit longevity, escape velocity, hopefully more. But I want to take a little bit of time today to all of the physicians in the audience, the people that gave up their lives to save ours. What do you think about AI? How would you use AI? Are you against it? I'll give it to Dr. Dalby."
},
{
"end_time": 4472.415,
"index": 183,
"start_time": 4454.599,
"text": " Thank you firstly for inviting me and thanks to the panel for a phenomenal discussion. I've used chat GPT in a practical sense. For example, one of the problems with neurology, it's one of the high burnout professions because there's so much documentation involved."
},
{
"end_time": 4493.763,
"index": 184,
"start_time": 4472.773,
"text": " I've created custom GPTs where I will type in a paragraph worth of what I've seen in the clinic and talk to the patient about and then it will convert it into a level five note that is billable and I get my money for it. And the advantage is I don't take the computer into the patient's room. I don't use EMR. I have"
},
{
"end_time": 4516.766,
"index": 185,
"start_time": 4493.763,
"text": " contact with the patients so they are very happy. I'm very happy and my hospital billing system is happy. So that's a practical use. It's also a phenomenal research tool. So I have used it, you know, if I'm writing a paper instead of reviewing PubMed article after article, I'll use it to pull out the five seven that are really relevant to me and get started."
},
{
"end_time": 4545.333,
"index": 186,
"start_time": 4517.039,
"text": " it's also for me personally it's a good teacher i've become suddenly in my old age interested in philosophy so i asked claude ai's to imagine where dostoevsky and tolstoy and tergenev my favorite russian authors are discussing the myth of sisyphus and it came up with a little short story that explained it to me like i have no professor would have so phenomenal uses practical as well as intellectual awesome thank you dr dolby"
},
{
"end_time": 4574.718,
"index": 187,
"start_time": 4545.589,
"text": " Well, I love AI. I think it's wonderful. I think it's definitely going to change a lot of things. I use it nearly every day. As a matter of fact, working on the gentleman that was here with me tonight, working on a new company that's going to be using AI in an area of healthcare that has not been addressed at all, which is the long-term care arena. We have an aging population that has not been addressed very well."
},
{
"end_time": 4604.394,
"index": 188,
"start_time": 4575.23,
"text": " I think it's wonderful. I commend all of you guys for doing it. Please keep it up."
},
{
"end_time": 4633.114,
"index": 189,
"start_time": 4604.633,
"text": " I'm not afraid of losing my job and I would tell anybody a hundred years from now they're not going to lose their job because you need still the human mind, you need the interaction, you need the touch, you need everything else that's going to go along with it. Unless you're going to have a Mr. Data from Star Trek, it's going to be hard to eliminate that. I appreciate all you guys. By the way, what would you say, we actually have an MD candidate here who came all the way from Boston just for this event."
},
{
"end_time": 4660.674,
"index": 190,
"start_time": 4633.541,
"text": " What would you say to people right now who are the next generations of physicians, the next generation? Well, I mean, I don't know. I was just talking to one of my colleagues, the gentleman that had the other neurologist there. And we just see medical education is going in a different direction. It's all about now passing a test. OK. And, you know, I have other friends who are teaching it right here at the at the university."
},
{
"end_time": 4690.299,
"index": 191,
"start_time": 4661.049,
"text": " at the medical school and have complained about the students are not focusing on the clinical aspect of it, the oscillation, the percussion, all of the different things that we did. And that's what you need to do, because that's going to make you a good doctor. It's going to make you a good clinician, good diagnostician. OK, passing a test is one thing. Yeah, that's great. OK, and you will. You will. I guarantee you will. But you've got to go the distance, right? Because that's a person, right? That's a person."
},
{
"end_time": 4713.387,
"index": 192,
"start_time": 4690.691,
"text": " That's a person all uniquely different and you have to treat them as such. Future doctor right here. I'm a medical student in Boston. I go to Boston University School of Medicine in their accelerated BAMD program. In terms of AI, I'm definitely interested in terms of the way it influences our medical education right now because a lot of students are using AI to teach them the material."
},
{
"end_time": 4732.91,
"index": 193,
"start_time": 4713.66,
"text": " Because it can teach them better than some of their professors, unfortunately, and also just due to the amount of time constraint, like you're saying, passing exams is very hard. So that's just another area to think about in terms of clinical AIs and terms of medical education. What are your concerns with AI? AI in medicine? In the clinic or for patients at home?"
},
{
"end_time": 4754.462,
"index": 194,
"start_time": 4733.114,
"text": " I think it's important to think about when you're stuck on the phone with one of those robotic voices, and they're not listening to you and you're trying really hard to explain something. AI has come a long way, it is very good at reaching states that are close to those which I would say are most intimate with our phenomenology."
},
{
"end_time": 4784.104,
"index": 195,
"start_time": 4754.718,
"text": " I would say I'm very bullish on AI from two perspectives really."
},
{
"end_time": 4813.148,
"index": 196,
"start_time": 4784.497,
"text": " One is offering really the promise of universal health care, which currently isn't available to many people in this country or around the world because of cost and access and availability. And I think to have, as one of the panel members mentioned, to bring that cost of consultation down to near zero is going to be an incredible thing. And many people suffer because they don't have access to pretty basic diagnosis and treatment. And I think that's an incredible opportunity."
},
{
"end_time": 4841.169,
"index": 197,
"start_time": 4813.558,
"text": " I think the other great opportunity is just using these very large data sets, getting more insight into complex diseases and, you know, drug development and those sorts of things. And also longevity, you know, this sort of emerging science of the, you know, both lifestyle intervention and, you know, drug interventions for longevity. And I think a lot of these are very complex problems that AI is very well suited to tackling. Even just plain trial and error. Do you think"
},
{
"end_time": 4867.619,
"index": 198,
"start_time": 4841.561,
"text": " that we could use AI to solve not only just unusual medical diseases, but come up with very unusual solutions to them that we never thought about before. Well, I think that's absolutely right. I think one sort of anecdote I'd give you is I remember as a medical student, obviously, you know, and you'll know this, you're sort of taught how to take a history and examine it. It's a very, it's a very sort of structured approach to getting information from the patient both"
},
{
"end_time": 4890.947,
"index": 199,
"start_time": 4867.961,
"text": " reported information physical information and i remember my in england we call them consultants but attending doctors here and they would basically say you know they would hear the first one or two lines of that story and they would immediately jump to the answer and that's what we call experience but that's also what ai will be brilliant at because when it's had millions and millions and millions of trainings"
},
{
"end_time": 4919.65,
"index": 200,
"start_time": 4890.947,
"text": " Thank you. Why don't we have a theory of everything for medicine or biology? This is something that's been chased after forever in mathematics and physics. Why isn't it there in biology yet? And by having something like that, but that also let us not have to die in the way that we do anymore, lose the people we care about."
},
{
"end_time": 4947.568,
"index": 201,
"start_time": 4920.179,
"text": " Yeah, I just want to real quick. I want to thank Kurt for traveling. Always makes the discussion quite interesting. And I want to thank Addie. You did a tremendous amount of work organizing our speaker. I've received several messages, emails and comments from professors saying that they recommend theories of everything to their students. And that's fantastic. If you're a professor or lecturer and there's a particular standout episode that your students can benefit from, please do share. And as always, feel free to contact me."
},
{
"end_time": 4956.203,
"index": 202,
"start_time": 4947.995,
"text": " new update started a sub stack writings on there are currently about language and ill-defined concepts as well as some other mathematical details"
},
{
"end_time": 4984.633,
"index": 203,
"start_time": 4956.425,
"text": " Much more being written there. This is content that isn't anywhere else. It's not on theories of everything. It's not on Patreon. Also full transcripts will be placed there at some point in the future. Several people ask me, Hey Kurt, you've spoken to so many people in the fields of theoretical physics, philosophy and consciousness. What are your thoughts? While I remain impartial in interviews, this sub stack is a way to peer into my present deliberations on these topics. Also,"
},
{
"end_time": 5010.708,
"index": 204,
"start_time": 4984.77,
"text": " Thank you to our partner, The Economist. Firstly, thank you for watching, thank you for listening. If you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself, plus it helps out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm,"
},
{
"end_time": 5036.732,
"index": 205,
"start_time": 5010.708,
"text": " Which means that whenever you share on Twitter, say on Facebook or even on Reddit, et cetera, it shows YouTube. Hey, people are talking about this content outside of YouTube, which in turn greatly aids the distribution on YouTube. Thirdly, you should know this podcast is on iTunes. It's on Spotify. It's on all of the audio platforms. All you have to do is type in theories of everything and you'll find it. Personally, I gained from rewatching lectures and podcasts."
},
{
"end_time": 5056.63,
"index": 206,
"start_time": 5036.732,
"text": " I also read in the comments"
},
{
"end_time": 5082.961,
"index": 207,
"start_time": 5056.63,
"text": " and donating with whatever you like there's also paypal there's also crypto there's also just joining on youtube again keep in mind it's support from the sponsors and you that allow me to work on toe full time you also get early access to ad free episodes whether it's audio or video it's audio in the case of patreon video in the case of youtube for instance this episode that you're listening to right now was released a few days earlier every dollar helps far more than you think"
},
{
"end_time": 5086.732,
"index": 208,
"start_time": 5082.961,
"text": " Either way, your viewership is generosity enough. Thank you so much."
}
]
}