Episode Transcript
[00:00:07] Hello. Welcome back to another episode of the Code 321 podcast. Today's episode is going to be for the nerds out there, all the techies. We're going to talk a little bit about artificial intelligence. As a disclaimer, I am not a coder. I'm not an artificial intelligence specialist. I don't work for OpenAI or Google, obviously. So I'm going to go over my understanding of AI and talk a little bit about how this has some implications for my job in medicine. For those of you that are very big fans of AI, you may know more, and if you have more information for me, definitely reach out. You can get a hold of me using the normal contact information and by finding me on our website, precisiontrainingusa.com. So let's talk a little bit about AI. AI is artificial intelligence, and this is software that is designed to look at vast amounts of data and do something. And I say "do something" because there are a couple different types of AI. When we use that broad term, we're actually referring to one of four different types, two of which exist currently, and two more that are theoretical and, as far as I'm aware as a member of the general public, don't currently exist. The first type of AI, the one that was originally designed, is called reactive AI. Reactive AI doesn't do any learning within the software system itself, but it does reason, meaning that it has an idea of what patterns to look for in large amounts of data. When it sees that pattern, it's able to recognize it and draw a conclusion, whereas a regular computer won't make that leap of drawing a conclusion from data. It will only respond to what we input. So if you were typing on a regular computer, the computer is not going to type anything for you. It's only going to record what you type into it.
Whereas with a reactive AI, if that data set is known to the software program, when you start typing, it will complete the rest of that function because it knows where you're going with the input you've put into the computer system. That being said, it won't grow or develop at all. It's going to remain that same system. Whatever data you put in there, whatever patterns it recognizes, that's going to be it. The second level of AI, which is what OpenAI, Google with Gemini, and all these other big software companies you're familiar with are currently working on, is limited memory. Limited memory AI basically refers to a software system that can take in large amounts of data, and we're talking huge, huge, huge amounts of data. For example, every single word that was ever typed on the Internet since its inception. That would be an example of the types of data they're inputting into these systems to try to educate the system in a way that it's able to draw conclusions. So limited memory basically means that it's not only going to recognize patterns and grow from them, it's also going to learn new patterns on its own without any input from humans. We're going to feed it a lot of information, and it's going to look at all of this data and come to a conclusion based on examples it can find in those vast, vast amounts of data.
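For the coders listening, the "reactive" idea, a fixed pattern matcher that completes known input but never learns new patterns, can be sketched in a few lines of Python. Everything here (the phrase list, the function name) is invented just for illustration:

```python
# A toy "reactive" system: it completes input that matches a pattern it
# already knows, but nothing it sees ever changes its stored patterns.
KNOWN_COMPLETIONS = [
    "chest pain",
    "shortness of breath",
    "blood pressure",
]

def reactive_complete(prefix: str) -> str:
    """Return the stored completion for a recognized prefix; otherwise just
    echo the input back, like an ordinary computer would."""
    for completion in KNOWN_COMPLETIONS:
        if completion.startswith(prefix.lower()):
            return completion
    return prefix

print(reactive_complete("chest pa"))   # -> chest pain
print(reactive_complete("hello"))      # -> hello (no known pattern)
```

The key point of the sketch is that feeding it new text never grows the list. A limited memory system, by contrast, would update its patterns from the data it sees.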
[00:03:40] The third type, which hasn't been released formally yet, that we know of, is called theory of mind. Theory of mind AI basically relates to the software's capacity to act more like a human. If you've ever heard of the Turing test, that's a classic example. With theory of mind, what it's looking at is: does the computer or the software system recognize human emotion? Can it respond in a way similar to how a human would respond to the same stimulus, and does it have the capacity for empathy? Would it understand social situations, for example? Rather than just drawing from historical data and crunching numbers like a traditional computer, would this computer be able to recognize an emotion and then respond with a complementary emotion, just like another human would? Which is kind of freaky to think about, that these computers could think and feel like a human being. But that would be theory of mind: that emotional and social component. The last one, the fourth type of AI, the most advanced, which does not currently exist but I'm sure is in progress, is really what I think a lot of people picture when we talk about artificial intelligence. This is like Skynet from Terminator 2. This is Westworld, if you've ever watched that show on HBO, where you have a self-aware AI. The computer system and software understands that it is a computer system and software. It has full conception of its place in society and the world, and it is essentially a human being without the biological components and the physical structure of a human. So it has empathy, it has emotion, it has reasoning and decision making. It is just as intelligent as a human, if not a lot more intelligent. But it also recognizes that it is a piece of software and not actually human. So that's pretty terrifying.
And I think a lot of people think that that is what we're talking about when we talk about Google and OpenAI and the Elon Musk companies that are out there creating this AI. But in reality, where we are right now is really that limited memory stage: the capacity to process huge amounts of information and then grow and improve, retrospectively looking at the information, making decisions, and then improving its own process of making decisions. But we're not seeing emotions or empathy or social interactions or any sort of self-awareness in AI, that I'm aware of.
[00:06:20] So as far as how they create these AI systems, one of the biggest problems they're running into with artificial intelligence development is that it requires so much data to create an artificial intelligence program, because in order for the artificial intelligence to grow and develop and get better, it needs to process massive, massive amounts of data. Think of LeBron James or Kobe Bryant getting really good at foul shots. You can't just take ten foul shots and all of a sudden be really good at it. These folks have been playing basketball since they were little kids, and they've shot thousands and thousands and thousands of foul shots to get where they are today. The same is true with these AI algorithms. They need lots of practice, and they need a lot of data in order to understand what to do next and improve on themselves. So one of the fundamental problems we run into with AI software and development is that the AI software is only as powerful as the data we're inputting. The more data that's available to these systems, the more accurate, efficient, faster, and more powerful these systems can become.
[00:07:35] So they actually ran into a situation when they were developing AI. This was kind of a contested, hot-button issue between OpenAI, Google, and some other software companies: they couldn't get enough information on the Internet to grow the AI at the rate they were looking for, to the strength that they needed. So what they did is they actually went on YouTube, took all of the videos on YouTube, transcribed them into words, and then fed those words into the AI software in order to give it more data to process and grow. They did the same thing with Reddit. So if you're familiar with those types of platforms, basically what they're doing is trying to come up with ways to get massive amounts of data and then feed it into these systems so they can learn as much as possible. The thing is, YouTube has specific terms and conditions that say you're not allowed to use data mining software or use the content that creators put up for this purpose. So there's kind of a little bit of a contested issue. It sounds like most of these tech companies are doing things like this in order to get enough information to grow their systems at the rate they need to be competitive. But it's worth thinking about as we start talking about artificial intelligence in medicine: remember, the fundamental purpose of AI is to crunch data.
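And because the output of that data crunching is only as strong as the input, here's a toy Python sketch, with every number invented for illustration, of what happens when a simple "model" is trained mostly on data from one group:

```python
# Toy illustration of the representativeness problem: a "model" tuned on
# one population gives systematically worse answers for a population it
# rarely saw. Every number here is invented.
def fit_mean(values):
    # "Train" a one-number model: just the mean of the training data.
    return sum(values) / len(values)

group_a = [10, 11, 9, 10, 10]   # well represented in training
group_b = [20, 21, 19]          # barely represented in training
training = group_a + group_b[:1]

model = fit_mean(training)

def flags_abnormal(value, center, tolerance=3.0):
    # Flag anything far from what the model considers "normal".
    return abs(value - center) > tolerance

print([flags_abnormal(v, model) for v in group_a])  # -> all False
print([flags_abnormal(v, model) for v in group_b])  # -> all True
```

Nothing is wrong with group B's data; the model just never saw enough of it, so every group B reading looks "abnormal." That's exactly the worry with under-represented populations in medical data sets.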
[00:09:01] And the data crunching is only as good as the original data set we put into it. Just like any scientific or mathematical equation, the more accurate and true to the problem the raw data is, the better your conclusion is going to be. So I think as we're talking about this in medicine, you've got to remember to think about where this data is coming from. One of the big concerns with AI in medicine is that there has been a long history of disproportionate discrimination against minority populations in the healthcare system. And if we don't have enough data that accurately represents those populations, then the output of the software in large hospital network systems may not be as accurate as we need it to be. So these fundamental problems we have with collecting data and inputting it into AI are only going to be compounded as the AI software is released and starts actually making decisions with it. So let's talk just a little bit about how these systems actually learn. When they're getting these large amounts of data input, and we're talking, like I said, all the words on the Internet, all the videos transcribed from YouTube, everything that's ever been written on Reddit, every article that's ever been produced, every podcast, it will take in all these things, transcribe them, and put the data in word form into these systems. The first type of learning would be supervised learning. Supervised learning is like, imagine teaching your elementary school kid a math equation.
[00:10:37] You know the answer to that equation because you've been trained on this. So what you're going to do is give the child some problem sets and determine if that child is able to come up with the right answer. If you think of that kind of question-and-answer system, that would be supervised learning: they're feeding data into the AI software knowing what answer they want, and hoping that the computer system is going to come up with that answer. And every time you do that, the computer is going to remember that specific problem, that specific data set you put into it, and come to the same conclusion every time. The next level would be unsupervised, which means we just allow the computer system and software to comb through large amounts of data looking for patterns. It's trying to determine cause and effect, and it's trying to determine correlations. We're not telling it which equations to use or which things correlate with each other; what we are doing is giving it access to this large amount of data, and it looks for patterns on its own. And then we want to make sure those are accurate. Reinforcement is the most powerful version of learning. Reinforcement learning basically means it's going to have access to all the data, it's going to look for patterns, and then it's going to choose the most efficient, best pattern out of all of the sets that you gave it.
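Those three learning setups can be caricatured in a few lines of Python. These are deliberately tiny, made-up illustrations of the ideas, not real machine learning:

```python
# 1) Supervised: we know the answers, and the system memorizes the
#    problem -> answer pairs we trained it on.
training_pairs = {(2, 3): 5, (4, 1): 5, (10, 7): 17}

def supervised_recall(problem):
    return training_pairs.get(problem)  # same answer every time, or nothing

# 2) Unsupervised: no answers given; the system groups the raw data itself.
def cluster_1d(values, threshold):
    low = [v for v in values if v < threshold]
    high = [v for v in values if v >= threshold]
    return low, high

# 3) Reinforcement: try candidate "patterns", score each one with a reward,
#    and keep whichever scores best.
rewards = {"pattern_a": 0.2, "pattern_b": 0.9, "pattern_c": 0.5}
best = max(rewards, key=rewards.get)

print(supervised_recall((2, 3)))        # -> 5
print(cluster_1d([1, 2, 9, 10], 5))     # -> ([1, 2], [9, 10])
print(best)                             # -> pattern_b
```

The real systems replace the lookup table, the threshold, and the reward dictionary with models trained over billions of examples, but the division of labor is the same.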
And so this is where a lot of these companies are making a lot of their money: by allowing for this reinforcement learning, trying to teach the computer, and allowing the computer to teach itself how to become more efficient, how to be as fast as possible, and how to be as accurate as possible. Not only can it find all the patterns, but can it find the pattern that is the best out of all the patterns it looks at? So let's talk a little bit about how these systems can be applied. Remember, AI software is primarily used for processing massive amounts of data. You can think of the original computer systems: gymnasium-sized rooms filled with machinery in order to calculate one small mathematical problem back in the mid-20th century. Nowadays we can do pretty much everything right on our cell phones. So we've progressed a lot in the ability to process data, but it still requires a human being to tell the computer what we want it to do. If we want to look for something online, we need to type in what we're looking for. The AI component is supposed to remove the human reasoning part of that, allowing the computer system to determine what needs to happen and execute that decision without any human input. One of the main ways we are seeing this implemented in the healthcare system is to take data from, let's say, a primary care appointment, crunch that data down, and then create an electronic health record of that visit. So they want to transcribe some notes. The big argument here is that when physicians or PAs or NPs or any other provider is sitting there working with a patient, they are doing the face-to-face communication: listening to what the patient is complaining about, offering suggestions, prescribing medications, making decisions, collecting information.
And after that visit is completed and the patient goes home, the doctor or the provider then has to go back to a computer system and type in notes about what happened. They need to summarize the appointment, give instructions to the patient about what to do next, write prescriptions, summarize any vital signs, and give any other pertinent data that the patient and the healthcare system need to know. And this is called the electronic health record. A lot of physicians and providers actually call this work "pajama time," which I thought was really entertaining. I hadn't heard that before. And the reason they say that is because after the visit's done, after they've worked a full shift, a lot of times these providers are spending multiple hours at home, theoretically in their pajamas, that's the joke, writing all these notes. So this is an additional burden that leads to more work for the provider, and these providers are really getting burned out from it. One of the largest areas of burnout for any sort of provider in a healthcare system is the documentation piece. And I can tell you from a paramedic's perspective, I definitely don't love documentation. It takes me an hour or more to do the patient care report on some of the patients we take care of because they're relatively complex. One of the ways AI developers decided they could help is that if the provider chooses to record audio from the visit, from the interaction between the patient and the provider, then that audio can be transcribed and placed into an electronic health record. And the artificial intelligence component of this is that it's not just going to put the transcript in there. It's going to understand what the format of those notes should be, and it's going to take the data, crunch it down, and put it into that format.
So say there's 45 minutes of conversation about the pain the patient is having in their back. It's not going to put 45 minutes' worth of writing into paragraphs and dump that in the record. It's going to summarize what happened in that conversation in just a few lines, just like a doctor would. And so a lot of hospitals are starting to implement this because it allows for more time with the patients. If the doctors don't have to spend an additional 40 minutes or an hour documenting after the visit, they can spend that time actually talking to that individual one on one, which is more satisfying for the patient and more satisfying for the doctor. It also gives medical practices the opportunity to optimize the number of patients they're seeing on a daily basis. If you've ever tried to get a primary care visit recently, you know that a lot of these places are really backed up, and some are booking out a few months to a year for physicals. Now, without those documentation requirements, these physicians actually have more time to get through more patients in a day. So I think that's one way this is being done really well. I think that makes a ton of sense. One of the things people have raised as a concern with artificial intelligence in medicine specifically is the idea of using it in real time to make patient care decisions that carry high liability. For example, there are some AI algorithms that will actually analyze chest X-rays, CT scans, and MRIs, and will make decisions and interpret those imaging results for the provider, even in life-threatening conditions like a stroke or an aortic dissection. And I think we need to remember that although that is likely faster than a human being would be able to read it, if the data the AI algorithm has been fed isn't accurate, there's a real likelihood of error.
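Going back to the note-summarization use for a second: that "crunch it into the format" step can be caricatured with a keyword filter. A real AI scribe uses a large language model; this toy sketch only shows the condense-and-reformat idea, and the section names and keywords are my own invention:

```python
# Toy sketch: turn a visit transcript into a short structured note by
# keeping only the clinically relevant lines and sorting them into sections.
def summarize_visit(transcript_lines):
    note = {"Subjective": [], "Plan": []}
    for line in transcript_lines:
        text = line.lower()
        if "pain" in text or "feels" in text:
            note["Subjective"].append(line)
        if "prescribe" in text or "follow up" in text:
            note["Plan"].append(line)
    return note

visit = [
    "Patient reports lower back pain for two weeks.",
    "We chatted about the weather for a while.",
    "I'll prescribe ibuprofen and we'll follow up in one month.",
]
note = summarize_visit(visit)
# The small talk is dropped; 45 minutes of audio condenses the same way.
```

The real systems do this with far more nuance, but the shape is the same: transcript in, short formatted note out, with the provider reviewing the result before it goes in the chart.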
So we just need to be really careful, as we start relying on these AI predictive models, that the model has been trained and fed the right information and has been validated in the right way. Just like any training program for any medical professional, we need to apply the same standard to the technology. I think another thing they're concerned about is if we start using these AI technologies to reduce the amount of time physicians spend doing certain things, like reading X-rays or CTs. If they're not doing that because they're relying on artificial intelligence, there's a concern that there will be some level of skill dilution, where now, if they have to do it themselves, or if the AI system breaks or fails or there's a cyber attack, they won't have the repetitions to be comfortable doing it. So in my personal view, and this is just me talking on the podcast, nothing to do with any organization beyond myself, it makes a lot of sense to be using AI retrospectively: to summarize notes, schedule patients, think about ways we can be more efficient, look for patterns in diagnoses, and use it for studies. I don't know if it makes a ton of sense to use it in real time currently, because I think we need to be really careful about where these programs are coming from. A lot of the AI systems on the market today are being developed in a very competitive market by for-profit companies, and there are a lot of incentives to get a product out as fast as possible and make claims that it's going to be really excellent, because there's a ton of money available for this, especially in the medical field. And I think that alone sets the stage for some abuse in how we develop these systems. And I would wonder, you know, who's regulating this?
I don't think there's really a great system in place; as far as I know, there's no government supervision about exactly how this needs to be done and what should be done in what order. So I think we just need to be really careful when we start implementing these real-time, artificial intelligence decision-making algorithms that have a high potential to affect a patient's clinical outcome in the moment. As for summarizing notes, it's pretty easy for the provider to just read over the note before they hit print and give it to the patient. I think that makes total sense, and it seems like it's better for everyone. So if you have any feedback on this episode, feel free to reach out to me. Let me know what's going on in your medical systems with AI. I'm curious to see if anyone is using it. I haven't heard of it being implemented in any sort of EMS or fire system yet, but I think there's a lot of room to grow in this area, and I would not be surprised at all if, within the next year or two, we start seeing these systems come into play in our daily practice. So I hope you enjoyed this episode. I know it's a little bit technical, but every once in a while I think we need to throw something out there for the folks that are big nerds and love the techie stuff, because I do think this is coming down the pike, and we're probably going to be seeing it at some point. So stay safe out there. I wish you all the best. And definitely wear your sunscreen. It's getting hot out there.