030 - Using AI to Recommend Personalized Medical Treatment Options with Joost Zeeuw of Pacmed

January 14, 2020 00:51:47
Experiencing Data with Brian T. O'Neill

Show Notes

Joost Zeeuw is a data scientist and product owner at Pacmed, a data-driven healthcare and AI startup in Amsterdam that combines medical expertise and machine learning to create stronger patient outcomes and improve healthcare experiences. He's also taught a number of different subjects—like physics, chemistry, and mathematics—at Lyceo, an online education service, and Luzac College in the Netherlands. Join Brian and Joost as they discuss the role of design and user experience within the context of providing personalized medical treatments using AI.

Resources and Links

Quotes for Today's Episode

“Pacmed in that has a three-fold mission, which is, first of all, to try to make sure that every single patient gets the treatment that has proven to work for him or her based on prior data analysis. And next to that we say, ‘well, if an algorithm can learn all these awesome insights generated by thousands and thousands of doctors, then a doctor using one of those products is also very capable of learning more and more things from the lessons that are incorporated in this algorithm and this product.’ And finally, healthcare is very expensive. We are trying to maximize the efficiency and the effectiveness of that spend by making sure everybody gets a treatment that has the highest probability of working for him or her.” — Joost

“Offering a data product like this is really another tool in that toolbox that allows the doctor to pierce through this insane amount of complexity that there is in giving care to a patient.” — Joost

“Before designing anything, we ask ourselves this: Does it fit into the workflow of people that already have maybe one of the most demanding jobs in the world?” — Joost

“There's a very big gap between what is scientifically medically interesting and what's practical in a healthcare system.” — Joost

“When I talk about design here, I'm talking kind of about capital D design. So product design, user experience, looking at the whole business and the outcomes we're trying to drive, it's kind of that larger picture here.” — Brian

“I don't think this is ‘normal’ for a lot of people coming from the engineering side or from the data science side to be going out and talking to customers, thinking about like how does this person do their job and how does my work fit into you know a bigger picture solution of what this person needs to do all day, and what are the health outcomes we're going for? That part of this product development process is not about data science, right? It's about the human factors piece, about how does our solution fit into this world.” — Brian

“I think that the impact of bringing people out into the field—whatever that is, that could be a corporate cubicle somewhere, a hospital, outside in a farm field—usually there's a really positive thing that happens because I think people are able to connect their work with an actual human being that's going to potentially use this solution. And when we look at software all day, it's very easy to disconnect from any sense of human connection with someone else.” — Brian

“If you're a product owner or even if you're more on the analytics side, but you're responsible for delivering decision support, it's really important to go get a feel for what people are doing all day.” — Brian

Transcript

Brian:    Hello everybody. Welcome back to Experiencing Data. And today I've got Joost Zeeuw on the line from Amsterdam. You're not from Amsterdam, but that's where you're calling in from today, is that right? Joost:    Yeah, that's right. Brian:    Nice. Joost:    We all moved to Amsterdam for work at some point. It's our version of the brain drain. Brian:    Excellent. Excellent. Well, I'm really happy to have you on the show here. We met briefly in the hallway after your talk at Predictive Analytics World in London and I enjoyed your talk there. And so just to give some context, you're the product owner of primary care and a medical data scientist at Pacmed. So tell us about what Pacmed is and what does that mean? What's your role like day to day over there? Joost:    Yeah, correct. Well, first of all, maybe Pacmed, the company that I work at: we are a startup in Amsterdam, moving towards being a scale-up depending on your definition of course, and we make data-driven decision support systems for the healthcare system, and we really focus on making medical products. So really supporting and helping out physicians, doctors, nurses in taking the best possible decisions by analyzing vast amounts of data. And Pacmed in that has a three-fold mission, which is first of all to try to make sure that every single patient gets the treatment that has proven to work for him or her based on prior data analysis. Joost:    And next to that we then say, well, if an algorithm can learn all these awesome insights generated by thousands and thousands of doctors, then a doctor using one of those products is also very capable of learning more and more things from the lessons that are incorporated in this algorithm and this product. And finally, of course, healthcare is a very expensive part of the world.
I think in most developed Western countries it's the biggest expenditure for government, and we are trying to maximize the efficiency and the effectiveness of that budget by making sure everybody gets a treatment that has the highest probability of working for him or her. Brian:    One thing I wanted to ask you already here was, you mentioned, if I heard this right, you talked about, you know, the multitude of doctors out there also looking at a system like this, a decision support solution like this, as a way of learning about what prescriptions, what treatments may make sense. Do you think that they're going into this with the perception that they might have something to learn from this? We probably look at it like the tool or the product is subordinate to the doctor's decision making and there's that perception there, right? That this tool is kind of subordinate and, as you said, decision support, which I also love that you mentioned that because that's really what it's about, right? Joost:    Yeah. Brian:    But at the same time, do you think they're coming in looking at it like, I might find a treatment that I haven't thought about, or maybe I didn't realize it was as effective as apparently it has been historically? Are they coming in the door with that perception? Joost:    Well, I think that's a very difficult question. Brian:    Uh huh. Joost:    Because at the moment, I don't really know whether they would have that perception from the get go, Brian:    Mmhmm. Joost:    At some point it should be like that. Say for example, the analogy that I really like to use is that back in the day at the university I did a program on nuclear technology. So you learn all this stuff about MRI scanners and PET scanners and all this sort of radiology stuff. And basically what those things do, what those technologies do, is they allow the doctor to do or see something that they themselves cannot see. Right?
They allow you to look into the body; if you go to a laboratory in a hospital, they analyze basically every bodily fluid that there is, which gives you information as a doctor that you would never, ever have been able to get by yourself. And I think that offering a data product like this is really another tool in that toolbox that allows the doctor to pierce through this insane amount of complexity that there is in giving care to a patient. Joost:    So really what we want to do is say, well, given the situation with the patient, and given, let's say, five different treatment options, we predict that these will be the probabilities of success for those different treatment options, given these reasons. Brian:    Mmhm. Joost:    And in such a scenario it's still going to be up to the doctor, of course, to make a decision on this. So to really say, well, I agree with this, or maybe I don't, for reasons that are rooted in their own expertise. And we really see these machine learning systems as an extra technological help to allow the doctor to see something that a normal human being can't do, Brian:    Mmhm. Joost:    Which is seeing through a vast amount of complexity and historical data and really getting the lessons from that. Brian:    Mmhm. You used some language here just a second ago, and you mentioned giving a probability for an outcome and then the reasons for that recommendation that came from the machine learning that you're doing. Is that a correct summary so far? Joost:    Yeah. Brian:    So tell me, I feel like this is an interesting thing to juggle because I'm curious, did the design and the experience, which for various reasons makes you want, or need, or perhaps legally require you to explain why the model chose this (not chose, but it thinks that there's an 82% chance that this particular remedy is right, and here's why).
Was there a discussion about balancing the fact that it's only 82% accurate and it gave four different possible remedies, but this model gives us the transparency into why it recommended those four different treatment paths, versus some other model that perhaps was more accurate, but it was more black box and it didn't explain why? It just said, you know, wrap the arm in a cast, send the patient home, five days of sleep, with no explanation of why. Joost:    Yeah. Brian:    Did you have to juggle that? That end user experience that you wanted, was that a factor in the algorithm design and the model choice, or it wasn't like that? Joost:    Well, of course that is always a question that comes to mind. However, practically we don't really experience this problem, because actually the perspective that we chose from the beginning is, we are going to make it a non black box. Brian:    Right. Joost:    So you're going to make a transparent system, because we believe that that is going to be the best tool that we can give to doctors, so that we can create a synergy between the medical expert and the technology that we offer. Also, indeed, legally, I don't know whether legally is a proper word, but let's stick with legally. Brian:    Compliance or something. Joost:    Yeah. If we talk compliance, in the States you guys have the FDA, and in Europe we have what you call a CE certification. Getting compliance with those regulations is much easier when you have an insightful model. And that is actually one of the biggest factors in this consideration: when we look at healthcare data, we generally don't have a very large amount of data. So from a person perspective, we are very lucky that not a lot of people end up in hospital. From a data perspective, that's too bad, because it means that we don't have, for a certain disease area, for example, millions and millions and millions of data points Brian:    Right.
Joost:    That would allow us to make very complex and deep neural networks, for example. So we haven't really gotten to the point yet where we had a medical problem, a disease area, where we had to take into account: well, using a black box model really increases accuracy significantly, what should we do? Brian:    Got it. Got it. So it sounds like in some ways, fortunately, it wasn't a factor; in other ways it makes some of the technical work a little bit more difficult, because your prediction accuracy isn't going to be as high. But as I always say, you know, well, you could get really high accuracy, but if you have low customer adoption, then does it matter that you achieved a really high accuracy Joost:    Yeah. Brian:    If no one's using the service because they don't trust it? So we have to look beyond the technical precision and kind of consider the factor that there's no decision support happening, and is that a valid outcome if there's no decision support occurring? You know? So, and that kind of ties into your talk, right? Joost:    Yeah. Brian:    So part of the reason I wanted you on the show is because you mentioned that as kind of one of the core factors as you guys build out, and Pacmed is still building out this product, we should say. Brian:    So you're kind of in this, I don't know if you self identify as like really in the core startup phase or a little bit more mature than that. But I thought it was funny, you had these four quadrants, you know: the data scientist making the awesome model, the data engineer saying models in isolation are useless, the medical expert asking how will this increase the quality of care? And then the designer asking, how the hell can anybody use that? Joost:    Yep. Brian:    And that these things have to come together to produce a product or an outcome that's actually, first of all, worthwhile. It's actually going to increase health outcomes but also is a viable commercial product that, you know, can function.
So tell me about this design piece. Like, how does this factor into how you're approaching Pacmed's solution here? Like, what does it mean to factor design into data science like this? Joost:    Well, I wouldn't necessarily say that it's purely the design part. Brian:    Mmhm. Joost:    Well, it is of course a part of the user interaction and the UI part, Brian:    Mmhm. Joost:    Where we have to say, this model has to be used by healthcare professionals on a daily basis. And as I also mentioned in my talk briefly, all these healthcare professionals at home will have an iPhone and this and that, and are used to working with basically all modern technology, which works perfectly and seamlessly. It's like it's an extension of your arm and you will understand how it works within a second. And that's setting the bar very, very high for new software developers, Brian:    Mmhm. Joost:    Especially if there's such a complex system like machine learning that's working under the hood. Brian:    Mmhm. Joost:    So what we have to do is make it in such a way that it fits into the workflow of the user. I've seen multiple projects that we considered doing, and then after visiting the hospital, for example, and just job shadowing for a day, you very quickly come to the conclusion: this is not going to work here. Joost:    So it's an interesting problem to predict, but it's impossible that these doctors will take a couple of minutes to walk to a screen somewhere, punch in some information, and then get a prediction on something; that's just not how, for example, emergency medicine works. So our design part really comes even before designing anything: does it fit into the workflow of people that already have, well, maybe one of the most demanding jobs in the world? And part of it really is the user interaction sort of design; it has to be clear instantly what it is, what we do and what we say. Brian:    Mmhm.
Joost:    And this is also where the regulatory part and the compliance part come in, in the end. It also really has to be clear that, for example, if our model says, "Well, we think that this person has, say, an increased probability of dying within the next couple of days." Joost:    And one of the features that predicts this is the fact that, I don't know, they did a certain blood test two days ago. Medical doctors, they are all scientifically educated, so they think much more in terms of causal relations. So then a very sensible conclusion is, "Okay, so if I stop doing this blood test on the second day, I will improve the health outcomes of my patients." Brian:    Mmhm. Joost:    And so it's really important and difficult for us to really show them: this is the prediction, these are the features, this is how we explain this prediction. However, that does not mean that if you would start to tinker with some of these predictors, that there's a causal relation to the health outcome that we will find. Brian:    Mmhm. Joost:    And that's really a difficult landscape to be in. Brian:    Mmhm. Can you take us back? So you just mentioned, like, you went out and shadowed some, it sounded like, healthcare professionals or doctors in their normal job routine. Is that correct? Joost:    Yeah, definitely. Brian:    Yeah. So right there, first of all, I think that's awesome behavior to do when you're in the process of figuring out what the solution should be, and that's not normal. So was this abnormal for you to do? Or, like, what made you think that you needed to go do that, or what was the impetus for that? And I'm curious, was there one particular insight that you gleaned that really stuck with you? Like, wow, I never would have factored that into anything we're doing with this product. Did you have a particular thing you can remember? Joost:    Yeah. Yeah. Most definitely more things than we have time to discuss on this show, I guess.
Honestly, those job shadowing days have been some of the most insanely exciting and just awesome experiences that you can have: being a non-doctor and being allowed to go through, for example, an emergency department for a full day and seeing what goes on there, that's just incredible. Brian:    Mmhm. Joost:    But I have two or three examples. The first one is indeed from this emergency department, where we were thinking whether we could make a product that could predict, if a patient comes into the emergency department... Then the nursing staff does what you call a triage. I don't know whether I'm pronouncing it correctly in English. Brian:    Yes. Uh huh. Joost:    So basically they estimate how severe or how bad the medical situation of this patient is, and therefore how long he or she can wait until he or she gets medical attention, right? If it's a swollen foot, then somebody that comes in with a heart attack has priority over that person. Brian:    Mmhm. Joost:    However, what you see in practice is that these are very difficult estimations and they don't always get it right, and there is some optimization to be gained in assigning the right level of urgency to these patients coming in. And we had experience with doing such a classification from another product that we were building, and we said, "Well, it's very comparable. Maybe we can do that in the ER, the emergency room, as well." So we went for a day of job shadowing and we realized: patients come in, and if it's somewhat urgent or somebody has some time on their hands, they will just go along with the process of monitoring or diagnosing or treating that patient. Joost:    And maybe in hindsight they will fill in some of the information that they got in a computer. So in the data set that we would get of the intake of this patient, really half of that could have been gathered only after the patient has already left the emergency room, and yeah, that's something that you want to know upfront. Brian:    Right.
Joost:    Another example, and that's way more user centered actually, is that we were looking at building a product that could help to get a better estimation of what dose of a certain medication different patients would need, Brian:    Mmhm. Joost:    Which sounds really interesting and nice. And then we talked to the people that were giving this dose, and did some user research, and they said, "Yeah, it sounds very nice that you guys would make something that would make this more accurate for us. Joost:    However, we also have to do 2,000 patients per person per day. So we don't have 10 or 20 extra seconds on these patients to fiddle around with this nice little tool that you guys are making." Brian:    Mmhm. Joost:    And that's really not the information or the insights that you get when you talk to, for example, say, the medical manager of such a facility, because they are not, like, in the trenches doing this day by day. Brian:    Right. And so what was your outcome from that? Did you kind of abandon work on that product because you realized they weren't going to spend the time to get the insight from it, or what change did you make with that information, if any? Joost:    Well, for the first example, the emergency room product, we said, "Yeah, this is just not going to be possible, for the very simple reason that to feed an algorithm, to get a prediction from it, you need the input data. Brian:    Right. Joost:    And if the input data is actually not present at the moment where you need the prediction, that's really the end of the line." Brian:    Uh huh. Joost:    For the other product, the second one, we noticed and we learned that actually, say, 90% of the patients out of these 2,000, they do go at a very high pace. But there's 200 of them that are very difficult to get a proper dose for, and those might actually be the ones that need the most help. Joost:    And that is something that a product like ours could help with. Brian:    Mmhm.
Joost:    But that of course does vastly change your proposition. Brian:    Mmhm. Joost:    So here you learn and you adapt from this, and you change what your expected impact would be, because now you only look at 10% of the patient population, for example, Brian:    Mmhm. Joost:    And then you'll have to reconsider: well, is that still worth it for us, but mainly also for the healthcare organization that's giving this care? Brian:    Mmhm. Joost:    So yeah, you narrow down, basically. And we decided to focus on the subgroup of patients where they do have a longer time period to treat them. Brian:    Got it. Got it. So it sounds like you agree that it's time well spent even if the answer is, let's stop working on this idea because it's not feasible, for whatever reason; it's not going to be used, or it's not valuable, or it's not going to drive the health outcome. It sounds like you feel like that was still a good use of time and resources to go do this shadowing. Is that right? Joost:    For every single product that we started working on where we did not do this, we really bashed our heads against the walls, because we found these problems later, when we had already invested and spent time and energy and effort and money on making these products, and then you realize, "Ah, actually this might not see the rate of adoption that we hoped it would get." Brian:    Mmhm. And it sounds like a light switch went on or someone said, "Wait a second," instead of building the product and then going out and seeing if it's going to be used and if it's going to create the value that we want it to. Joost:    Yeah. Brian:    How did it change at Pacmed? Like, what was the driver to say, "Wait a second, let's talk to some doctors, let's go shadow their work, and see how do we build a solution that fits naturally into the work of a healthcare provider?" Where did that come from, that drive? Joost:    I think...
Well, the drive really has always been there, but you also need to be able as a company to do this. Brian:    Yeah. Joost:    And so I think the two main things are, one, the number of project proposals that we received really exploded, Brian:    Mmhm. Joost:    Which meant that, yeah, we had the luxury of saying, "Okay, we don't have to do every single project proposal because we need to generate the revenue, Brian:    Right. Joost:    To be able to keep our heads above water." So in the end that gave us some, yeah, some time to really assess these problems. Brian:    Mmhm. Joost:    And the second one, and I think that's just been a lesson that we learned over time in terms of experience: the people or the organizations that would come to you as a developer with a medical problem are probably going to be academic researchers. So they're doctors who are linked to university hospitals and they have an expertise in this area. Joost:    And they say, "Well, this will be interesting to have as a tool that would help in this part of the healthcare system." Brian:    Mmhm. Joost:    What we however did not realize in the beginning, and what we do realize very, very well now, is that these are definitely not the people that treat these patients on a day to day basis. So a university researcher or somebody from a hospital who is an expert in cardiology is interested in building some model, but then it turns out that, for example, the nurse in a primary care facility is the one that's eventually going to use this product. And he or she would say, "Yeah, that does not fit my workflow. That's not really top of mind, that's not really worth the effort here." Brian:    Mmhm. Joost:    There's a very big gap between what is scientifically, medically interesting and what's practical in a healthcare system. Brian:    Sure. Sure.
Joost:    So yeah, we really shifted that focus: only starting product development when we were really, really sure that it also came from the end users, who are usually not the ones that are doing the scientific research in universities on this topic. Brian:    Mmhm. And as the kind of product owner, are you the one that primarily does this, or do you bring your engineers or data scientists or designers out? And by the way, when I talk about design here, I'm talking kind of about capital D design. So product design, user experience, looking at the whole business and the outcomes we're trying to drive; it's kind of that larger picture here. So, but I'm curious, like, literally who are the bodies that got in the car, that drove to the hospital, that did the shadowing? Was that all you, or did you have a team or multiple sessions? Like, tell me more about that. Joost:    Yeah. In most of these cases it's the product owners at our company, Brian:    Mmhm. Joost:    We have different product owners for the different types of products that we build. And luckily most of them are medical doctors themselves, so they can also rely on some experience and knowledge from their years in the clinic. But indeed, especially in the earlier days, it would mainly be the product owner going there. But these days we really also focus on getting the data scientists to tag along with the product owner as well, so that you can have a shared vision and a shared idea of what's going on at the site. Brian:    Got it. So what's it like bringing in the... So, part of the reason I'm asking this question is, I don't think it's "normal" for a lot of people coming from the engineering side or from the data science side to be going out and talking to customers, thinking about, like, how does this person do their job and how does my work fit into, you know, a bigger picture solution of what this person needs to do all day, and what are the health outcomes we're going for?
That part of this product development process is not about data science, right? It's about the human factors piece, about how does our solution fit into this world. So I'm curious, like, what's it like when you bring a data scientist out with you into the hospital setting? Is that something where they feel like, "Wow, this really changes the way I look at my work?" Or is it like, "Oh, I kind of have to do this and I really just want to, like, you know, work on my models in the closet by myself; just leave me alone to work on technical stuff?" Like, have you seen a culture change at Pacmed, or what's that dynamic like? Joost:    So that's also a bit of a difficult question. Brian:    Uh huh. Joost:    Because I would like to say, yeah, the moment we started doing this, we changed the culture. But I think that our hiring policy is very clear, which is that we of course need the best technical talent, which is scarce, as probably most of the listeners will also know. Brian:    Sure. Joost:    However, we also have a very, very big other boundary condition, which is that the people that we bring in have this societal drive in making something that will make the world better. Brian:    Mmhm. Joost:    And we've had quite a number of very talented applicants for whom this factor wasn't present enough, and that we had to let go, which from a technical perspective is really too bad. But we are of course making things that have at least an influence on the lives of patients. Joost:    And as for one part of your question, what happens when you bring a data scientist to one of these places? It's, yeah, it's really amazing. As I said, for everybody going to a hospital that's not a medical student or a doctor, it's a wow experience. Brian:    Mmhm. Joost:    But imagine working on the intensive care data problem, for example, and you go through all this data and you look at it every single day.
And then at some point you go to a hospital and you are at the intensive care unit and you see all these patients lying there; there's tubes and pipes and drains and wires everywhere, and there's alarms going off every minute, everywhere, from patients that are going into whatever medical emergency state. That has a huge impact on how people see the work that they're really doing. Brian:    In your situation we're talking about life and death situations, and that's not necessarily common. But I think that the impact of bringing people out into the field, whatever that is, that could be a corporate cubicle somewhere, it could be a hospital, it could be outside in a farm field, as we had on a previous episode. But the point is, usually there's a really positive thing that happens, because I think people are able to connect their work with an actual human being that's going to potentially use this solution. And when we look at software all day, as you know, most of us here obviously working in the software space, it's very easy to kind of disconnect from any sense of human connection with someone else. Joost:    Yeah. Brian:    And it's just, like, users, you know; it's kind of like this field of faceless people that are out there. But when you actually know, like, Dr. Joost or someone that's working at this hospital, and you're picturing this person with, you know, someone on a gurney with a bleeding gunshot wound or whatever the heck it is, Brian:    And you're thinking about, like, your iPad application, you're like, "Oh my gosh, how does this fit in?" Like, you know, he's juggling all these different things. It really changes your perspective on, how do I fit into that big picture of developing something that matters? Joost:    Yeah. Brian:    It's no longer your little world. I think it really can turn a light on. So it's something that I advocate for, you know, at least doing a part of the time.
If you're touching the solution, you know, your staff and you, if you're a product owner, or even if you're more on the analytics side but you're responsible for delivering decision support, it's really important to go get a feel for what people are doing all day. Brian:    Like, what is their job? Joost:    Yeah. Brian:    Whether it's insurance or, you know, even if it's something not as visceral as the medical field, it's really important to have that perspective if you really want to deliver something that's not based on a lot of guessing and assumptions, or falling back on the "it's their problem" attitude. Like, "Well, if they don't understand what this means, I don't know, but I'm pretty sure that the math is right." You know, that perspective doesn't work. And especially in your case, in this situation, not only does it not work, but it's not safe; there's potentially lives and health outcomes at risk, right? I mean. Joost:    Yeah, yeah. Most definitely. Brian:    Yep. Let's move on to something else. You had a funny slide in your presentation: "Sometimes it works, because modern Western medicine is like Donald Trump." That was really funny; you had this great comic, a little graphic illustration there. What does that mean in the context of developing decision support software? Joost:    I'm a bit worried that my answer's going to be dangerous, because I presume that he listens to your show. Brian:    I have a feeling Donald Trump is not listening to this particular podcast. Joost:    I think I might agree with you there. So yeah, it's an idea that we got, well, actually some time ago already, and it's a bit of a difficult statement, so I'll add a bit of nuance after explaining it. So what happens in modern Western medicine, if you look in the States or Europe for example, is that you are treated in all fields of medicine according to certain guidelines. And those guidelines are based on clinical medical research and randomized controlled trials. Brian:    Mmhm.
Joost:    So they get very pure research outcomes, as you would call them, saying: given somebody has an infection, a urinary tract infection for example, these randomized controlled trials have shown that this antibiotic, say antibiotic number two, works best, so give that. And that's very nice, but the problem, and this is where the Donald Trump statement comes in, is that modern Western medicine, these randomized controlled trials, is incredibly sexist and racist, and it discriminates against the sick and the elderly and the young, and basically anybody who isn't incredibly healthy and young. Because to do a study like this, you need to have a study population. Well, let me ask you a question in between: would you test a new drug on an eighty-year-old person?

Brian:    Would I personally test a drug on an 80-year-old person?

Joost:    Say you're a doctor who is allowed to test drugs on people.

Brian:    I'm not sure I have enough information yet. I'd be afraid to test anything on a human being without a little bit more information, I guess. So I'm going to deflect your question.

Joost:    Good call, good call. So yeah, if we have a medication or a treatment that we're going to test, you can't really test it on groups who are at a higher risk of negative side effects.

Brian:    Mmhm.

Joost:    So you won't test it on, for example, pregnant women, or the elderly, and you won't test something new on very young children. And also, you're going to get a study population from the people that predominantly have this particular disease. I'll just go to an example, because I think that will make it a lot clearer. We developed a product, a decision support tool, that gives advice on which type of antibiotic, and there are, say, eight of them for this particular problem, you should give for a urinary tract infection.
And what we saw there is that these decisions are now based on medical guidelines, and those medical guidelines are based on, say, a 30-year span of medical research.

Joost:    And when we read the papers that this medical research consists of, we saw that approximately 5,000 young, healthy, non-pregnant Caucasian women were included, some 70 pregnant women, zero patients with severe urinary tract infections, and zero patients with what you call co-morbidities or co-medication, that is, patients who have other diseases or are taking other medications.

Brian:    Mmhm.

Joost:    And, as is often the case in this part of medicine, no men were included. And that is of course a bit of a problem, because you're going to base the guidelines on that study population, but you are then also going to extrapolate its results to patients that were never included.

Brian:    Right.

Joost:    And we know from research that drugs have different effects on different types of people. We know there's a big difference between Asian people and European people, for example, in how they metabolize certain drugs.

Brian:    Mmhm.

Joost:    So the idea that an antibiotic that works for, say, a 25-year-old healthy Caucasian woman would work equally well for a 70-year-old Asian man who is overweight and taking heart medication, that's just not the case.

Brian:    Mmhm.

Joost:    So what we did to build this product is we got the actual data from some 300,000 patients who went to a GP with a urinary tract infection. And indeed, what you see in the data is that the people matching the original study population are indeed 50% or so, roughly the majority, but the remaining 50% are types of patients that were not included in that original study. So we made a machine learning model out of this.
Joost:    And basically what you get from that is personalized, individual guidelines stating which medication to prescribe, guidelines that can also be used for patients that you are not able to include in classical scientific medical research. That's of course not saying that there is something wrong with how this classical medical research is done. As I explained, you can't include these types of patients in a study.

Brian:    Mmhm.

Joost:    But you do get a gap because of that, between the study population and reality.

Brian:    I'm curious, so in the interfaces that you provide, did you find that you need to present a recommendation that's based on the original trials or studies in addition to a recommendation that's based on your algorithms? I imagine when you start showing a treatment recommendation that feels foreign to a healthcare practitioner who's always done it based on the research, there's some friction there, right? Like maybe there's a trust issue. So do you have to show both of them? And then there's some explanation of why the personalized recommendation is this: the historical, generalized study says this, and here's why we recommend the personalized one, because X, Y, and Z, and, you know, the trial didn't factor in overweight Asian men. Can you talk to me a little bit about that?

Joost:    Yeah, yeah. Actually, you did most of the explaining, because that is indeed exactly what we did. And this is an important thing to state: also in this case we make decision support, so we don't say, "Take antibiotic number two."

Brian:    Right.

Joost:    We say, these are the eight possible antibiotics, and these are the probabilities of them curing the disease.

Brian:    Sure.

Joost:    So indeed, you see the different probabilities of success.
And then we also say in that graphical interface, well, these are, for example, the two antibiotics that the current guidelines would recommend.

Brian:    Mmhm.

Joost:    And of course this is also a very interesting check: for the people that are included in the study population, you find identical results. So our model also says that the highest probability of success would be achieved by giving the antibiotics that are prescribed by the guidelines,

Brian:    Mmhm.

Joost:    Because of course that is the study population that was included.

Joost:    So indeed, we show the probabilities of success for all the different antibiotics, we show what the current guidelines would prescribe, and then we also show, basically, how these predictions came about. So for antibiotic two, we say: based on the fact that in our data we have 13,000 highly comparable patients who also had these characteristics, we saw that this medication worked in 90% of the cases, and that's why we recommend it.

Brian:    Mmhm.

Joost:    What is interesting here, I think, is that, as you stated, there is some friction, but, and I have to see whether I can remember the phrase, there are these medical guidelines, and everybody knows, it's in the name, they are guidelines.

Brian:    Right.

Joost:    So if you have a grounded reason to deviate from those guidelines, then that is actually, well, it's not really expected, but it's good practice.

Brian:    Mmhm.

Joost:    So of course there will be some friction in terms of doctors who say, well, I don't really know what to do with something other than these guidelines, but especially the somewhat older and more experienced doctors are very used to deviating from the guidelines based on their own experience.

Brian:    Mmhm.

Joost:    So they might actually see that the deviations they used to make from experience can now also come from a model predicting something.

Brian:    Mmhm.
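The "comparable patients" display Joost describes can be sketched, very roughly, as a nearest-neighbour estimate. Everything below (the feature set, the similarity measure, the history data) is a hypothetical illustration, not Pacmed's actual model:

```python
from dataclasses import dataclass

@dataclass
class Record:
    features: tuple      # e.g. (age, is_pregnant, has_comorbidity)
    antibiotic: str
    cured: bool

def similarity(a, b):
    # Toy similarity: number of exactly matching features.
    return sum(1 for x, y in zip(a, b) if x == y)

def success_probabilities(patient, history, k=3):
    # For each antibiotic, estimate P(cure) from the k most
    # comparable historical patients who received that antibiotic.
    by_drug = {}
    for rec in history:
        by_drug.setdefault(rec.antibiotic, []).append(rec)
    probs = {}
    for drug, recs in by_drug.items():
        comparable = sorted(
            recs, key=lambda r: similarity(patient, r.features), reverse=True
        )[:k]
        probs[drug] = sum(r.cured for r in comparable) / len(comparable)
    return probs

# Hypothetical history: antibiotic "B" worked for elderly patients
# with a co-morbidity, "A" mostly for young healthy ones.
history = [
    Record((25, False, False), "A", True),
    Record((25, False, False), "A", True),
    Record((70, False, True), "A", False),
    Record((70, False, True), "B", True),
    Record((70, False, True), "B", True),
    Record((68, False, True), "B", True),
]
patient = (70, False, True)   # 70 years old, not pregnant, co-morbidity
print(success_probabilities(patient, history))
```

For this hypothetical elderly patient with a co-morbidity, antibiotic "B" gets the higher estimated success probability, mirroring the point that evidence from the guideline study population need not transfer to under-represented patients.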
I think it's fascinating, during the process of validating a tool or application like this, and I'm totally self-reflecting here, like if I was working on this, to see how doctors react to new treatments, or to a path they hadn't really thought about or maybe were suspicious about, where the tool provided a level of evidence or clarity that really turned them onto it and made them reevaluate their past methodologies for treating a situation like that. I could see there's a really cool educational aspect there that could be fun to monitor if you were testing the product out with someone. I don't know if you guys have done much of that yet, to see how they react to, you know, a surprising probability recommendation or whatever. I know you don't call it a recommendation, but a probability for a particular remedy.

Joost:    Yeah.

Brian:    So, yeah, this sounds like a really interesting place to be.

Joost:    Yeah, most definitely. A very interesting example in that field is, of course, that with machine learning on medical data, the model could find something that the doctors don't know yet.

Brian:    Right.

Joost:    Something they don't recognize, and then hopefully they say, "We don't recognize this; this is wrong," and then we are challenged to really work through it and validate that it is indeed correct.

Brian:    Right.

Joost:    And if it then still holds, it means that we've found something new, which is very interesting.

Brian:    Right.

Joost:    So we did a project, we didn't really make a product out of it, but an analysis project where we worked with a psychiatry department, where we looked at the effectiveness of antidepressants combined with sedatives. Is "sedatives" a proper word here in English?

Brian:    Yes. Mmhm.

Joost:    Yeah. So they are very often given in combination.
However, as psychiatrists themselves would say, if you asked 10 different psychiatrists to treat one patient, you would get 10 different treatments. So there's a lot of intuition going in there as well.

Brian:    Mmhm.

Joost:    And at some point we found in our data analysis that, for the type of patient whose depression lay partly in the fact that they had insomnia and trouble sleeping, which is a severe driver of depression, if they were given a certain type of antidepressant combined with a certain type of sedative, it did not work, or it worked far less effectively than it should.

Joost:    So we identified that and said, "Well, we think this combination works against the sleeping capabilities of this patient." And that was all very new, and we presented it to a group of psychiatrists when we wrapped up the project. And one of the psychiatrists stood up and said, "I was at a conference last week in the United States, and they presented a paper there, just published, that talked about this correlation."

Brian:    Wow.

Joost:    So it was very interesting that our model was able to pick up on something like this, which luckily in that case was also found in more classical medical research 10,000 kilometers away.

Brian:    Yeah.

Joost:    So those are of course the very, very cool things, if you can find something new like that.

Brian:    Yeah, that sounds exciting. It's got to be really fun when you guys land on these little gold nuggets, you know? Even if they do need more research, right?

Joost:    Yeah.

Brian:    Like, even if they don't necessarily become the status quo, they feed a new line of inquiry or research, to go out and do a deeper study to dig into some causality, right? Something beyond a correlation.

Joost:    Yeah, there is of course a bit of a caveat there: the models that we build will mainly be correlation-driven.
Brian:    Mmhm.

Joost:    We do have a research group that looks into causal inference. But for now it's mainly correlation, although of course there will be some causal relations hidden in there. So if we can provide a nice starting point for a more classical scientific research project to prove or disprove a causal relation somewhere in there, then that's of course a very cool starting point.

Brian:    Yeah. Yeah. Well, Joost, this has been a great conversation. I've really enjoyed listening to your experience working with medical professionals and figuring out how to improve health outcomes. I wanted to ask one last question, again thinking about the user experience piece. I think we covered this a little in our planning call, but was it correct that you said that, unfortunately or maybe fortunately depending on the situation, you don't take any input data after a suggested course of action is presented? Like, here are five different scenarios and the probabilities that they may treat this patient successfully; the doctor doesn't input the treatment that was used, and you don't then go and see whether it actually worked and feed that back into the model so it becomes part of the product and the recommendations. Is that correct? Because there's either GDPR or health regulations saying the model can't be trained on any new information except what was disclosed at the time you got your approval. Did I say that right?

Joost:    Yeah, indeed. So what we do is, because we work under this CE certification, in the States it's FDA regulation,

Brian:    Mmhm.

Joost:    To be able to have a medical technology on the market, you need to have an incredibly detailed technical file that explains every single detail of this product,

Brian:    Uh huh.
Joost:    Which means that you also have to explain, down to a deep level of detail, what your training data looks like. And that means that this live, continuous optimization of your models while they are running isn't really possible in our scenario.

Brian:    Mmhm.

Joost:    That doesn't have to be the biggest problem, because you can simply say, "Okay, we will collect the data that's being generated in practice and retrain the model every three or six or 12 months or something like that."

Brian:    Uh huh.

Joost:    But I think a lot of data science, because you're making algorithms or you're coding stuff, really adheres to this lean startup methodology, right, where you can very quickly iterate and test and retrain and test again.

Brian:    Mmhm.

Joost:    Those sorts of timelines and development speeds aren't really possible in the medical realm.

Brian:    Right. Yeah. Well, the risk factors are obviously different, right?

Joost:    Yeah, and it's-

Brian:    It's not just a software bug.

Joost:    Yeah, definitely. And it's [inaudible 00:49:24]. Luckily, it is heavily regulated.

Brian:    Right.

Joost:    Actually, in terms of what you stated, that the treatment that was chosen, and what happened afterwards, can't be put back into the data: that's one of the biggest challenges there is in medical machine learning, that the outcome measure in general often isn't in the data.

Brian:    Right.

Joost:    Nobody checks a box that says "treatment worked, yes or no."

Brian:    Right.

Joost:    So really determining and defining and refining an outcome measure in our data, that is an insanely large part of the work that we do,

Brian:    Mmhm.

Joost:    Which is both interesting and challenging.

Brian:    Yeah, I can imagine. Well, thanks again. This has been great. Tell the listeners where they can follow you. Are you on LinkedIn, or on social media at all, if people wanted to keep in touch with what you're doing?
Joost:    Yeah, definitely. You can follow us on LinkedIn, which is Pacmed, P-A-C-M-E-D. Not to be confused with PacMed, the Pacific Medical Center. We get a couple of thousand new "colleagues" every month from people who get a job at the Pacific Medical Center and subscribe to our LinkedIn.

Brian:    It's pacmed.ai, is that correct?

Joost:    Yeah, that's our website, pacmed.ai, which covers all our cool new developments and all our new stories. So yeah, if you're interested, definitely have a look at the website.

Brian:    Cool. And you're also on LinkedIn, is that correct?

Joost:    Yeah, definitely.

Brian:    Cool. All right, I'll find your LinkedIn and put you on there. This has been a really great conversation, so thanks for sharing some information about what's happening with machine learning and data science in the medical field. This is great.

Joost:    Yeah. Cool. Likewise, thank you so much for having me.
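A footnote on the outcome-measure challenge Joost raises near the end: since nobody checks a box saying "treatment worked," outcome labels in this kind of work are typically derived from proxies in the record. A minimal sketch, assuming a hypothetical rule that a UTI treatment "worked" if no follow-up antibiotic prescription appears within 28 days; the rule and the window are illustrative, not Pacmed's actual definition:

```python
from datetime import date, timedelta

def label_outcome(prescription_dates, index_date, window_days=28):
    # Proxy label: treatment "worked" if no follow-up antibiotic
    # prescription falls within `window_days` after the index one.
    window_end = index_date + timedelta(days=window_days)
    followups = [d for d in prescription_dates
                 if index_date < d <= window_end]
    return len(followups) == 0

# A follow-up prescription nine days later suggests the first
# treatment failed, under this proxy definition.
print(label_outcome([date(2020, 1, 1), date(2020, 1, 10)], date(2020, 1, 1)))  # False
print(label_outcome([date(2020, 1, 1)], date(2020, 1, 1)))                     # True
```

Defining, validating, and refining proxies like this against clinical reality is, as Joost says, a large share of the actual work.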
