In this episode of Experiencing Data, I sat down with James Taylor, the CEO of Decision Management Solutions. Our discussion centers on how enterprises build ML-driven software to make decisions more quickly, more precisely, and more consistently-and why this pursuit may fail.
We covered:
“If you’re a large company, and you have a high volume transaction where it’s not immediately obvious what you should do in response to that transaction, then you have to make a decision – quickly, at scale, reliably, consistently, transparently. We specialize in helping people build solutions to that problem.” – James
“Machine learning is not a substitute for hard work, for thinking about the problem, understanding your business, or doing things. It’s a way of adding value. It doesn’t substitute for things.” – James
“One thing that I kind of have a distaste for in the data science space when we’re talking about models and deploying models is thinking about ‘operationalization’ as something that’s distinct from the technology-building process.” – Brian
“People tend to define an analytical solution, frankly, that will never work because […] they’re solving the wrong problem. Or they build a solution that in theory would work, but they can’t get it across the last mile. Our experience is that you can’t get it across the last mile if you don’t begin by thinking about the last mile.” – James
“When I look at a problem, I’m looking at how I use analytics to make that better. I come in as an analytics person.” – James
“We often joke that you have to work backwards. Instead of saying, ‘here’s my data, here’s the analytics I can build from my data […],’ you have to say, ‘what’s a better decision look like? How do I make the decision today? What analytics will help me improve that decision? How do I find the data I need to build those analytics?’ Because those are the ones that will actually change my business.” – James
“We talk about [the last mile] a lot … which is ensuring that when the human beings come in and touch, use, and interface with the systems and interfaces that you’ve created, that this is the make-or-break point where technology goes to succeed or die.” – Brian
Brian: All right, everybody. Welcome back to Experiencing Data. This is Brian T. O’Neill, your host, and today I have James Taylor on the line-wait, not the guitarist. Not the-and I’m sure you hear this all the time-I really like James Taylor, actually. You are the CEO of Decision Management Solutions-a little bit different. Tell us: what is decision management, and what does it mean to have solutions in the decision management space?
James: Sure. So, it’s great to be here. So, Decision Management Solutions. The bottom line here is if you’re a large company, and you have a high volume transaction, where it’s not immediately obvious what you should do in response to that transaction, then you have to make a decision, and you have to make a decision quickly, at scale, reliably, consistently, transparently, all those good things, and we specialize in helping people build solutions to that problem. How do you build software solutions, primarily, that address those issues and let companies handle their decision making more quickly, more precisely, more consistently?
Brian: Got it. Got it. And I know, one of the things you talk about in your work which appealed to me and made me want to reach out to you is, I use the framing ‘the last mile.’ We talk about this a lot in the work, which is ensuring that when the human beings come in and touch, use, interface with the systems and interfaces that you’ve created, that’s kind of the make or break point where technology goes to succeed or die. So, talk to me about this starting at the endpoint from your perspective. I want to hear how you frame this in your own words.
James: Sure. So, our experience-and this is backed up by various surveys-is that the typical analytical project has problems at both ends. People tend to define an analytical solution, frankly, that will never work because it’s the wrong solution; they’re solving the wrong problem. Or they build a solution that in theory would work, but they can’t get it across the last mile. And our experience is that you can’t get it across the last mile if you don’t begin by thinking about the last mile.
And so, both problems are in fact indicative of the same problem, which is that I need to understand the business problem I’m going to solve before I develop the analytics, and I need to make sure that I deploy the analytics so that it solves that business problem. And for us, what we would say is, “Look-” ask people why they want to use data, why do they want to use analytics, and they will always tell you, “Well, I want to improve decision making.” “Well, just decision making in general, or a specific decision?” “Well, normally a specific decision or a set of specific decisions.” And we’re like, “Okay, so do you understand how that decision is made today? Who makes it? Where do they make it? How do they tell good ones from bad ones? What are the constraints and regulatory requirements for that decision?”
If you don’t understand those things, how are you going to build an analytic that will improve that? And so we like to begin-we often joke that you have to work backwards. Instead of saying, “Here’s my data, here’s the analytics I can build from my data. Now I’m deploying these analytics to see if I can improve decision making.” You have to say, “What’s a better decision look like? How do I make the decision today? What analytics will help me improve that decision?” Now, how do I find the data I need to build those analytics because those are the ones that will actually change my business. So, we often talk about working backwards, or the other phrase I use a lot is I misquote Stephen Covey; “You have to begin with the decision in mind.”
Brian: Yeah. No, I’m completely in agreement, because the decision-making focus forces you to really concentrate on the problem and to get clarity around what a positive outcome looks like for the people who care-the ones who are sponsoring the project or creating the application, or whatever the thing is. It forces you to get really concrete about that and to get everyone bought in on what success looks like. And it just makes the whole technology initiative usually a lot easier because there’s not going to be a giant surprise at the end, like, “What is this?” [laugh].
James: Yes, exactly. Yes.
Brian: This doesn’t help me do anything. You know? [laugh].
James: Exactly. No, I have lots of stories about this, as I’m sure you do. I think one of my favorites is this guy who called me up and said, “Does your company build churn models?” And I’m like, “Well, yeah, we can help you build a churn model. I’m curious, why do you need a churn model? What is it for?” And he said, “Well, we have a real customer retention problem and a churn model will help me solve this problem.” And I said, “Okay, so humor me. What decisions are you responsible for? What are you responsible for in the organization?”
And it turned out, he ran the save queue in a telco. I don’t know if you’ve heard the expression save queue, but it’s basically the people you get transferred to in the call center when you say you want to cancel your service. These are the people whose job it is to persuade you not to cancel your service. So, he’s telling me this. I said, “That’s it? That’s your scope?” He said, “Yes. That’s the only bit I can change.” And I said, “Okay, in that case, I can give you a free churn model.” “You can?” “Yes: churn equals one.”
Brian: [laugh].
James: Because absolutely everyone you speak to has said they want to cancel their service. Therefore, they are at 100 percent risk of churn. [laugh]. So, yes, you have a churn problem, but no, a churn model will not help you. [laugh]. Other models might, but that’s not going to help you because it’s way too late for a churn model. You know?
Brian: Yeah, yeah, yeah. No, I [laugh] totally understand. So, one thing I wanted to ask you about-and I know design thinking is part of your work, and you have a flavor of it that I want you to go into; maybe this question will force you to do that. One thing that I kind of have a distaste for in this space, in the data science space, when we’re talking about models and deploying models, is thinking about operationalization as something that’s distinct from the technology-building process. When we think about a system’s design, we would say, “Well, that’s integral to the success of the product.”
And it may not be the job of the literal data scientists who built a model to be responsible for all of that, but to not even be considering it or participating in it-or for whoever is in charge of this, what I would call the data product manager-this is a real problem if you’re just working in isolation here. So, do you think operationalization should really be a second and distinct step from the technology, or should it be integral to thinking about it holistically, as a system? This is a whole system; there are multiple human beings, departments, technologies, engineering, all kinds of stuff involved with it. What do you think about that? I don’t-am I crazy? [laugh].
James: Yeah, I mean, we talk a lot about operationalization, but we would very much, as you do, regard it as part of the project. If you haven’t operationalized it, you’re not done. One of my pet peeves is where data scientists say, “Well, I’m done.” And I’m like, “Well, no, you’re not.”
I had this great call-a journalist called me the other day, and she was asking for an interview, and I was being my usual cynical self about AI and machine learning. She said, “Well, how do you explain notable AI successes?” And I’m like, “Well, give me an example of one.” And she said, “Well, the AI that successfully identified tumors in radiology scans that humans had missed.” And I’m like, “I’m going to challenge your definition of success.”
She said, “How can that not be a success?” I said, “Because as far as I know, they haven’t treated one patient differently because of it. No patient is healthier today because of that AI. Therefore, they’re not done yet. They may well yet make it successful, and I’m intrigued by the potential, but it is not yet a successful AI because it has not yet improved anybody’s health outcome.”
And that was its [00:07:52 unintelligible]. Because I’m with you: it’s not operationalized. We’re not finished. You can’t declare victory [laugh] at that point. You have to finish it.
And we often use the CRISP-DM framework-you know, business understanding all the way around-and one of our key things is, when you get to the evaluation stage, one of the reasons you need to understand the decision making is that you should be evaluating the model in terms of its impact on the decision making, not just its fit to the training data, or its lift, or the theoretical, mathematical ROC and all these things. Those are all important, but you should then also say, “And when used in the decision making”-which we understand well enough to describe how it would change thanks to our model, because we started by understanding it-“this will be the business impact of deploying the model.” And if you can’t do that, then why did you bother building the model? It’s not anybody else’s job but yours to explain the impact of your model on the business problem that your model is designed to solve.
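To make that evaluation step concrete, here is a minimal sketch of scoring a model by its impact on the decision rather than by statistical fit alone. Everything in it-the costs, the toy data, the function names-is a hypothetical illustration, not something from the episode or from Taylor’s methodology.

```python
# A minimal sketch: evaluate a fraud-flag model by the business cost of the
# decisions it drives, not only by its accuracy. All numbers are hypothetical.

def decision_cost(actual_fraud, flagged, review_cost=25.0, missed_fraud_cost=400.0):
    """Total cost of the decisions: every flag triggers a paid review;
    every un-flagged fraud gets paid out and costs far more."""
    total = 0.0
    for fraud, flag in zip(actual_fraud, flagged):
        if flag:
            total += review_cost
        elif fraud:
            total += missed_fraud_cost
    return total

# Toy holdout data: 1 = fraud, 0 = legitimate.
actual = [1, 0, 0, 1, 0, 0, 0, 1]

# Compare the model's decisions against the current process (here: review
# everything), which is the benchmark that matters to the business.
review_all = decision_cost(actual, flagged=[1] * len(actual))
with_model = decision_cost(actual, flagged=[1, 0, 0, 1, 0, 1, 0, 1])  # all frauds caught, one false alarm

print(f"review everything: ${review_all:.0f}")
print(f"with model:        ${with_model:.0f}")
```

Under these invented costs, the model halves the cost of the decision process, and that delta-not lift or ROC-is the number to put in front of the business.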
So, I’m with you. It is a separate set of tasks that need to be included, but the idea that you’re going to have a separate organization do it-I disagree completely with that. I think it’s got to be part of the machine learning team, and I think machine learning teams need to hold their members accountable for whether models are successful. You get a tick, “Very good,” on your resume or your internal job descriptions when the model is deployed, and not before. I don’t care how talented you are; if I can’t deploy the models you build, well, then there’s a problem.
I need you to be engaged in that process. And I think if you read the stuff about MLOps-you know, MLOps and DataOps, AnalyticOps and stuff like that-I read these descriptions, and I’m like, this is all stuff the IT department already does. There’s not a single task in this list of MLOps-y things that isn’t done already by the IT department. So, the reason you want to add this to your ML tool is because you just don’t want to talk to the IT guys.
If you were willing to actually go talk to the IT guys, they’ve got data pipelines. They’ve got ways to do any of these things. Now, some of them aren’t scaled for the kinds of things that analytics people need. I’m not saying it’s a complete thing, but just this idea that somehow that the machine learning team has to create all this stuff from scratch, I think it’s just because they don’t want to talk to the IT guys [laugh]. I want to be able to stay in my little bubble and do my little thing, and not have to interact with people. I think that’s a-in a big company, in the end, you’re going to have to talk to the IT guys so you might as well get over it and go talk to them now.
Brian: Do you think that’s a-I mean, to me, that sounds like a management problem. It’s either hiring the wrong people, or not providing the right training, or not explaining like, “This is what the gig is.” [laugh]. The gig is really-it’s this story. It’s this change that we want to push out.
It’s about making the change. It’s not about exercising technical proficiency alone. That is not the game we’re playing on this team; if you want that, there’s a place to go for it. But I feel like this is a skill that doesn’t come naturally, because there are a lot of other skills that come into play in doing it well. I don’t know if that’s your experience as well, and I wonder if that’s just a management change that needs to happen, rather than something that comes out of people individually wanting to do this.
I mean, I have students in my seminar, sometimes, who come in, and I would say they’re curious, but oftentimes it might be that a leader wants them to develop this skill outright, and they’re a little bit resistant to it because they think they’re there to do something else, which is more academic in nature. From my perspective, I was like, “This sounds like academic research work that you want to do, perfecting a model.” And then you end up with what I call ‘technically right, effectively wrong.’ Right? It’s 92 percent accurate with 1 percent use? You know? [laugh].
James: Yes, exactly. I would agree with you; I think it is a management problem. I think that the big challenge with this often is that the senior management have thrown up their hands. “It’s all too complicated, I’m not going to engage with it.” And so they say, “Well, here’s our data. Tell us something interesting.”
Or they hire people who think it is their job to use machine learning and AI and do these cool things, and they go off and do them. I remember talking to one big company that had hired this big group of AI and machine learning folks. They were spending a ton of money on AI and machine learning. And we had a conversation with a group where we were proposing a slightly different approach.
And they were like, “Well, we’re not going to do that. We’re going to use the AI and machine learning group to do it.” And I’m like, “Okay, well, help me understand how they’re going to help solve this problem.” And so, I’m pushing on them, “Well, how are you going to solve this problem?” “Well, we’re going to use AI and machine learning.” “I know. How are you going to solve-this person, she’s sitting right here. She runs this claims group. How are you going to solve the problem she just described? What kinds of machine learning and AI? Apply it how?” “Well, we’re going to use AI and machine-” I mean, really, this person had no idea, no interest in the business problem. “We’re going to use AI and machine learning. We got a big budget, big team, rah, rah us.” And I’m like, “And your job here is, apparently, to spend money on machine learning and AI.” [laugh]-
Brian: Yeah, and this-
James: -not, in fact, to make any. You know?
Brian: [laugh]. Well, that’s the thing. And I think, especially for leaders in this space, it’s like, do you want to be associated with a cost center? Or do you want to be a center of excellence and innovation? And I think the C-level team is thinking, AI is a strategic thing, we have all this data, everyone else is doing it, I need to be in the race.
And some of this is hype-cycle stuff. Some of this is legitimate-they should be thinking about this-but the assumption is, “Well, I’ll hire this team, and then I’m going to get all this magic dust falling from the sky.” And it doesn’t happen that way. Like with any technology, it really doesn’t happen that way. You have to think about operationalizing it, you have to think about the experience of the people using it. And what’s the story? How do you change people’s mindset? How do you change existing behaviors? And what about people’s fear and trust? There are all these aspects that are human things.
James: Many years ago, when I was a young consultant, I was working, writing a methodology for IT projects. And then we merged it with an organizational change methodology. And this old organizational change consultant-he seemed really old at the time, but he’s probably my age now. But at the time, I was young, so he seemed really old-and he said, “James, you technology people are so funny.” He said, “You think it’s all a technology problem. It’s always an organizational change problem.”
Brian: Yeah.
James: And now that I’m old, I have to say he had a certain point. It’s always going to be an organizational change problem. And that is, in fact, the number one issue. And we lovingly refer to our customers as big boring customers. They’re big boring companies.
And if you’re a big boring company-and most people work for big, boring companies-you’re constrained by all sorts of things. So, I was just talking to an analytics head, a very, very smart guy, very interested in machine learning, works at a big bank. And one of the platform providers working with his team said, “Well, when it comes to making offers to people based on responding to events with an offer, why don’t you just let the machine learning learn what works best?” And he’s like, “So, if you’re looking at, say, the mortgage page, to begin with it would just randomly pick an offer: a car loan, a savings account, right? It’s got nothing to do with the fact that you’ve spent the last 30 minutes looking at the mortgage. That’s not going to give that customer a good sense of the bank. I need to constrain those offers to at least the ones that make some kind of sense. And if I know which customer it is, to at least eliminate the ones that they’re not allowed to buy.”
Brian: Or a product they have already.
James: A product they have already. Or a product that they can’t have because they don’t have another product that it relies on, or a product they can’t have because they’ve already got another product that’s considered by the regulator to be a comparative product and you can’t own both. And on and on. And so, “No, I’m not just going to let the machine learning model pick. I want to decide some structure for this. And then I want the machine learning to help me get better at it.”
And so that’s a persistent thing with us: machine learning is not a substitute for hard work, for thinking about the problem, understanding your business, doing things. It’s a way of adding value. It doesn’t substitute for things. It adds more insight, more precision, new opportunities, and so on. But this idea that most big companies can throw out what they already know and just replace it with machine learning is, as you say, part of the hype cycle. You’re just going to spend a lot of money and get nowhere.
Brian: Yeah, yeah. Talk to me about how you implement design thinking-or whatever you want to call it-this process of human-centered design into your work, and why does it matter? Have you seen results from it? How does it connect? Are clients resistant to that?
James: Sure. So, there are several principles of design thinking: you want to be very focused on the people involved, you want to prototype things, and you want to show people, not tell them. All this kind of standard design thinking stuff. And what we found is, when it comes to decision making-because if you’re trying to do analytics, you’re trying to improve decision making-
Well, you can prototype UIs if there’s going to be a UI involved, but what you really want to do is understand the decision making, because that’s the thing you’re trying to redesign. And so we use decision modeling. Decision modeling is a graphical notation for drawing out how you make a decision-a way of sketching out the pieces of your decision, the sub-decisions, and the sub-sub-decisions. Just like a process model describes a process, and a data model describes a database, a decision model describes a decision.
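As a concrete (and entirely hypothetical) illustration of what “a decision model describes a decision” means, here is a toy decision structure in code-a top decision, its sub-decisions, and the input data each one needs-in the spirit of graphical notations like the DMN standard. The names are invented for illustration, not taken from any real client model.

```python
# A toy sketch of a decision model as a data structure: a top decision, its
# sub-decisions, and the input data each needs. All names are invented.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Decision:
    name: str
    inputs: list[str] = field(default_factory=list)        # input data required
    sub_decisions: list[Decision] = field(default_factory=list)

pay_claim = Decision(
    name="Pay claim?",
    sub_decisions=[
        Decision("Policy in force?", inputs=["policy record"]),
        Decision("Claim valid?", inputs=["claim form", "medical report"]),
        Decision("Fraud risk?", inputs=["claim history", "provider history"]),
    ],
)

def walk(decision: Decision, depth: int = 0) -> None:
    """Print the decision structure-one way to 'prototype' a decision by
    walking a real example through each sub-decision in turn."""
    print("  " * depth + decision.name, decision.inputs or "")
    for sub in decision.sub_decisions:
        walk(sub, depth + 1)

walk(pay_claim)
```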
And so you lay out this decision model, and we always begin by asking people, how do you decide today? Don’t tell me what you’d like to do or what you think you should do: what do you do today, or what should you do today? And obviously, we say ‘should’ because sometimes there are inconsistencies or whatever. But we really try and start with that and define it.
And what we find is that no one’s ever asked them this before. People have said, “What data do you need?” Or, “What kind of analytics could we give you?” Or, “How would you like the UI to look?” Or, “What kind of report do you need?” But no one’s actually said, “So, how do you decide to pay this claim and not pay that claim? How do you decide to lend Brian money for a car and not lend James money for a car? How do you decide?”
And it turns out, like in any design thinking, people really like to tell you. [laugh]. So, they tell us, and then we build a decision model. So, now we have a model that’s a visual representation that they can say, “Yes, that looks like how we decide, or how we ought to decide today.”
And this gives us a couple of things. It means we can actually prototype the decision making because we can say, “Well, okay, let’s take a real example of a real customer, how would you decide their lifetime value? How would you decide their credit risk? How would you decide which products they’ve already got? How would you-” we can work our way through the model, essentially prototyping how that decision would really work for a real example. So, you can really get a very robust understanding of, “Okay, this is really where you are today, how you decide today.”
Well, now you can ideate. We have this game we play called the ‘if only’ game. We’ll say, “Okay, fill in the blank: if only I knew blank, I would decide differently.” And then people will go, “Oh, hm. Well, if only I knew who had an undisclosed medical condition. If only I knew who wouldn’t pay me back. If only I knew who had life insurance with another company.” Okay, well, now the machine learning team can go, “Okay, so what if we could predict that-and how accurately would we have to predict it?”
And then you can ask all sorts of interesting questions. I’ll give you a concrete example: we did the exercise with some folks in disability claims, and they were trying to decide if they could fast-track the claims. Fast-tracking just means we send you a few emails, and then we pay you, right? We don’t go through a big interview, have a nurse come visit, the whole production. And they’re just trying to fast-track these claims.
And so we asked this ‘if only’ question. She said, “Well, if only we knew whether your claim matched your medical report.” You’re claiming for something-yeah, you’ve broken your leg or whatever it is-does your claim match the medical report that you attach to the claim? In other words, does the medical report say you broke your leg? And the analytics team were like, “Well, we’d started looking at text analytics to analyze the medical reports, but we assumed you’d want to know all the things that the medical report said. You know, ‘what are the things wrong with you that are in this medical report?'”
And she said, “No, no, no, no. I just need to know the one you’re claiming for is in the medical report.” And the analytics team is like, “Oh, well, that’s a lot simpler. That’s a much easier problem.” Because you don’t care if you have diabetes, or you’re overweight, or you’re [00:20:24 unintelligible]. I actually don’t care. I just care that you have, in fact, broken your leg. Okay. And they said, “All right. So, if we did that, how accurate would it have to be?” And she famously said, “Better than 50/50.” And they practically fell off their chairs. And I kid you not, they made her say it again while they recorded her-
Brian: Wow.
James: -because they didn’t really believe her. And she’s like, no. I mean, it’s a fast-track process. We’ve already got other bits of the decision that eliminate people who’ve got mental health issues, or long term care issues, or-we never fast-track those. So, we’re just looking at the ones which we might fast-track.
And for the ones we might fast-track, if the medical report probably says you have the same thing you’re claiming for, that’s good enough to fast-track you because we’ve got steps later to double-check all this stuff. We’re not going to pay you because of this decision. We’re just trying to fast-track it and avoid cost in the process. And, frankly, it’s a sniff test. As long as it’s better than 50/50, we’re good. So, 60/40, something like that would be great.
And of course, the analytics team-I talked to them afterwards. I’m like, “So, you originally had this plan to build a minimum viable product to come up with-” I forget if it was 85 or 95-“percent accurate assessment of all the conditions in the medical report, and that was going to take you the rest of the year.” This was like in February or something. “How long is this going to take you now?” They’re like, “You know, we’ll probably be done in a couple of weeks-” [laugh], “-because we just need to do a very rough-and-ready version. You say this is what’s wrong with you, it’s got an ICD-10 code; how likely is it that this medical report includes that?” And so we collapsed the minimum viable product from nine months to a few weeks because that’s what she actually needed to improve that decision. And that to me was like, “Okay, this is why we do it this way.” [laugh].
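The “better than 50/50” answer is easier to accept with some back-of-the-envelope arithmetic. Here is a hypothetical sketch-every cost and rate below is invented, not from the episode-showing why a modest matching model can clear the bar when later checks catch its mistakes.

```python
# Back-of-the-envelope sketch: when is a 'better than 50/50' matching model
# worth deploying for fast-tracking claims? Every number is hypothetical.

def fast_track_savings(accuracy, n_claims=10_000, full_review_cost=150.0,
                       fast_track_cost=10.0, downstream_catch_cost=40.0):
    """Expected saving versus reviewing every claim in full, assuming
    mismatches are caught by later checks rather than paid out blindly."""
    matched = n_claims * accuracy                 # fast-tracked cheaply
    mismatched = n_claims * (1 - accuracy)        # later checks catch these
    cost_with_model = (matched * fast_track_cost
                       + mismatched * (fast_track_cost + downstream_catch_cost))
    cost_without = n_claims * full_review_cost    # status quo: full review
    return cost_without - cost_with_model

for acc in (0.55, 0.60, 0.95):
    print(f"accuracy {acc:.0%}: saves ~${fast_track_savings(acc):,.0f}")
```

Under these invented numbers, most of the value comes from fast-tracking at all; pushing accuracy from 55 to 95 percent improves the saving only by about 13 percent, which is exactly the diminishing-returns point James makes next.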
Brian: Did you get a sense of, like, skin crawling that, like, “50/50? But that’s just guessing, practically?” Was there a sense that-
James: Oh, yeah.
Brian: -how could you possibly accept a double F-minus? Like, if we’re in school, right, that would be, like, you’re not just failing, you’re so far from passing? And it’s like, no, 51 is good. [laugh].
James: Exactly. Exactly. No, it was-
Brian: Did that make people’s skin crawl?
James: -difficult. Yes. For sure. The group split into two: there were definitely the ones who were like, “Cool, that means we’ll be able to get some value out of this quickly, get a minimum viable product-” more agile-thinking types-“and we’ll be able to go on to something else.” And there were other people who were clearly extraordinarily uncomfortable with the whole notion, who were very uncomfortable with the idea that this would be useful, and were like, “Well, surely a more accurate model would be better?” I’m like, “Well, probably, but probably not a lot better, because she’s only got two choices. She can fast-track it or not fast-track it.” And at some point, the extra work you spend to make it more accurate is just not worth the payoff.
Brian: Tell me about prototyping and design. You talked about decision-centric dashboarding as well, so talk to me about when you get down into the interfaces and how do you prototype and test this stuff to know that it’s going to work before you commit?
James: So, we’re typically focused on high-volume decisions. And what we have found is that when you’ve got a transactional kind of decision-a decision about a customer, about an order, about a transaction-you’re much better off applying analytics to an automated part of the decision than to the human part of the decision.
So, what we tend to do is we’ll build these decision models, and then we’ll identify an automation boundary: which bits of this model can we automate? And then we’ll try and capture the logic for that decision making as it stands today. So, now we’re not necessarily making the decision any better than we used to, but we’re making it repeatedly and repeatably-you know, consistently-and we can start to generate data. We start to save off how we made that decision.
So, we’ll say, “Well, we made this decision to pay this claim because we decided your policy was in force, we decided the claim was valid, we decided there wasn’t a fraud risk, and we decided it wasn’t wastage. And we decided your policy was in force by deciding these things. And we decided that your claim was valid by deciding these things.” And we have that whole structure for every transaction. Now, at that point, we haven’t done any analytics, but we have got control of the decision.
Then we start to say, okay, which bits of this decision model could be made more accurate by applying machine learning, a prediction, or so on? And then what we’re trying to do is tweak the rules to take advantage of that new prediction. So, a rule might have said, I’ve got a set of red flags: if you’ve ever lied to me before, if you went to a doctor who’s lied to me before [laugh], if you went for a service that I have lots of issues with, then it got red-flagged, and so it’s going to get reviewed. But if it didn’t get a red flag, I’m interested in whether you could predict that it should have got a red flag.
So, this is the first time this doctor’s lied to us, but he’s got the characteristics of a doctor who’s going to lie. This is the first time you’ve lied to us, but you have the characteristics of someone who might lie to us. This is the first time we’ve had this treatment, but it smells like the kind of treatment that gets red-flagged a lot. So, I don’t have an explicit reason to reject this claim, but perhaps you can predict that I probably ought to at least look at it.
And then we change the rules slightly. So, now the rules say, “Well, if you had a red flag, it goes to review and if you didn’t have a red flag, but this predictive model says, ‘smells bad. Looks too much like an outlier. Looks too different from the usual run of the mill stuff,’ then we’ll review it anyway.” So, you start to add value by identifying things that were missed in the current explicit version of it.
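A minimal sketch of that rules-first, model-second routing. The field names, service codes, and threshold are hypothetical, and the suspicion score is assumed to come from some model trained elsewhere:

```python
# A minimal sketch of rules-first, model-second claim routing. The red-flag
# rules keep their explicit, explainable reasons; the model's suspicion score
# only widens the review net. Field names and thresholds are hypothetical.

RED_FLAG_RULES = [
    lambda claim: claim["claimant_lied_before"],
    lambda claim: claim["doctor_lied_before"],
    lambda claim: claim["service_code"] in {"S-901", "S-777"},  # problem services
]

def route_claim(claim: dict, suspicion_score: float, threshold: float = 0.8) -> str:
    """Return 'review' or 'fast path' for one claim."""
    if any(rule(claim) for rule in RED_FLAG_RULES):
        return "review"            # explicit red flag: we can say exactly why
    if suspicion_score >= threshold:
        return "review"            # no explicit flag, but it 'smells bad'
    return "fast path"

claim = {"claimant_lied_before": False, "doctor_lied_before": False,
         "service_code": "S-123"}
print(route_claim(claim, suspicion_score=0.91))   # -> review: the model caught it
```

The design point is that the explicit rules keep their explainable reasons, and the model only adds reviews for cases the rules would have waved through.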
So, we very much focus on automation first and improvement second. So, to your point, we will prototype the models, but then generally the next step is to automate a chunk of it, so that we can start to run simulations and automate the decision making, and then start applying analytics to it. And the dashboards we build are mostly about how we made decisions, not about making a decision. It’s like, “Here’s a dashboard that shows you how the decisions were made so you can improve your decision-making process.” Rather than, “Here’s a dashboard to make a decision.”
We do those occasionally, but we often find that when there are humans and automation involved in a problem, people’s mental model is, “Well, the machine is going to make a bunch of decisions, and then I’m going to make the final choice.” And we often find the reverse is true: I need you to make some key judgments about this customer, or this transaction, or this building. Once you’ve told me what those are, I can wrap those into an automated decision and do the ‘so what,’ because the ‘so what’ is pretty well defined. We have one example from a logistics effort where they had to pick a ship for a given shipment.
And there’s all this stuff in there about predicting arrival times and departure times. Obviously, there’s a bunch of rules about, is it the right kind of ship? And there are some analytics: predicting its arrival time, is it going to be available, is it likely to need repair, and all those things. But then there was one last bit, which is, is it seaworthy? Well, someone has to go look at the ship. That’s not the final decision; that’s one of the inputs, but someone’s got to go do that. And we see that a lot. So, often we’re taking predictions, taking some human judgment, and then wrapping that into an automated ‘so what’ framework.
Brian: In a solution like that, are you also tasked with sometimes figuring out that whole end-to-end process? Let’s say a guy or gal with a camera actually goes to the dock and takes pictures-maybe this is a bad example-but do you ever think about that holistic, entire experience, and are clients kind of realizing this is ultimately part of a decision mindset-
James: Yeah, for sure.
Brian: -is that literally we have to include that part of this thing into it and we have to make that whole experience work? And-
James: And it has to work. Exactly. Yes. Very much so. We often see, for instance, that people’s first mental model of automation is, “I’m going to automate it and you’re going to override it when it’s a bad idea.” The problem with that is, I don’t really know why you override it, and no matter how much work I’ve done on the automation, you still have to do all the work again to decide if you’re going to override it. And one of my pet peeves is people will say, “Well, the AI will tell you whether you should pay the claim or not, and it’ll be transparent about why it came up with that, and then you can decide if you’re going to pay the claim.” And I’m like, “Well, but then I have to read the claim. I thought the whole point was, I didn’t want to read the claim.” [laugh]. That was kind of the point.
Brian: Well, it depends on what the success criteria was, right? In that case, if the real goal was to prioritize what stuff needs human review because there’s so many claims coming in-
James: Oh, sure. Right, yeah.
Brian: -then the AI is providing decision support, it’s saying, “Likely, likely, likely, likely, likely.” And you’re like, “Check, check, check, check, check, check, check,” really fast because you’ve accelerated that claims processing thing if that was the goal.
James: If you trust the AI, right. Yeah. And the problem is, well, how do you develop trust in the AI? Well, you have to go look at the claim. And so in the [00:30:57 unintelligible] example, what we had was a bunch of things that could be decided by rules-like underwriting, which is very rules-based: there’s a life underwriting manual, and certain things have to be true.
We ask you these questions. And then what we found is there were places where we needed to make an assessment of your risk in certain areas. So, what would happen is we would try and decide with the data you’d submitted. And then if we couldn’t decide, it would basically use a process to reach out to the underwriters and say, “Hey, look, we got this customer coming. We don’t need you to underwrite them so much as we need you to assess how crazy a scuba diver they are.” Because they said they’re a scuba diver, but we don’t really know how crazy a scuba diver they are. And we know the algorithm requires us to know if they are a casual scuba diver, a serious but safe scuba diver, or a nutjob-you know, scuba diving alone, at night, in a dark underwater cave. Okay, so we’re totally going to charge you extra for your life insurance.
Brian: Right.
James: And so it’s easier for you to do that-or we can’t do that, whatever. So, what the process is doing is reaching out to people to say, the decision can’t be made unless we get your inputs here. So, instead of saying, I can’t make the decision, here’s all the data, [vomit noise], you decide-it says, I can’t decide because I need you to decide these two or three things. You’re being asked to make these very specific judgments in the context of this application, but you’re not just being dumped back into the process. We’re not just throwing it over the wall and giving up, right?
Brian: Right. Abort. And-[laugh].
James: Yeah, exactly. And the other thing we found is that once you do that, you can start to say, “What are we asking Brian to do in this circumstance? We’re asking him to go look at this historical data and draw a conclusion about trends.” “Okay, well, trend analysis is something analytics are pretty good at, so maybe we could, in fact, use a machine learning model there at least some of the time.” Versus, “No, it’s much more of a conversation, with this very qualitative kind of stuff.” Then we might go, “Okay, that seems too hard to do with analytics right now. We’ll continue to ask a person.” And so by breaking out the role very specifically, you can be much clearer.
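Here is a sketch of the “ask for specific judgments” pattern James describes: the automated decision names exactly which inputs it still needs, instead of handing the whole case back. It reuses the scuba-diver example from a moment ago; all names, the enum, and the premium loading are hypothetical.

```python
# A sketch of 'ask for a specific judgment': instead of dumping the whole
# application back on a human, the decision names exactly which inputs it
# still needs. All names, enums, and the surcharge are hypothetical.
from enum import Enum

class DiverRisk(Enum):
    CASUAL = 1
    SERIOUS_BUT_SAFE = 2
    EXTREME = 3

def underwrite(application: dict) -> dict:
    """Decide a life application, or return the specific judgments needed."""
    missing = []
    if application.get("scuba_diver") and "diver_risk" not in application:
        missing.append("diver_risk")   # ask the underwriter this one question
    if missing:
        return {"status": "needs_input", "questions": missing}
    loading = 0.5 if application.get("diver_risk") is DiverRisk.EXTREME else 0.0
    return {"status": "decided", "premium_loading": loading}

app = {"scuba_diver": True}
print(underwrite(app))                 # asks only for the diver-risk judgment
app["diver_risk"] = DiverRisk.EXTREME
print(underwrite(app))                 # now decides, with an extra loading
```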
My favorite one was a medical one, where one of the key decisions for treatment selection was, “Does the patient look like they will survive surgery?” And people were like, “Well, how are you going to automate that?” And I’m like, “Well, I’m not going to automate that. I’m going to ask the surgeon.” [laugh].
And they said, “Well, why wouldn’t you just let the surgeon override a bad answer if they didn’t think you were going to survive surgery?” And the surgeon themselves, who was in the meeting, said, “Well, because sometimes it’s still the right answer. You’re so sick, you’re going to die if I don’t do surgery. So, I would like the engine, even though I put in, ‘I don’t think Brian’s going to survive the surgery,’ to be able to come back and say, ‘Well, too bad. Nothing else has any chance of working except surgery; you’re going to have to go ahead and try.’ Whereas if I say he probably will survive surgery, maybe it would suggest surgery more often; if I say he’s not going to, just less often.”
But that’s part of the decision. It’s not an override, because once I override it, I’m ignoring all the other bits of it. And so that for us has been the key thing: once you focus on the decision at the end, you’re bringing people in to provide their expertise as part of the decision making. And so we tend to design everything from that perspective-rather than, “How can the computer help you make a decision?”, it’s, “Which bits of the decision do we need you to make?” And then, obviously, we still have to ask how we can help you make a good one, but what we’re really trying to do is embed your decision-making into an overall, effective decision-making approach.
Brian: Yeah, sure. I think you’ve laid that out clearly, too, especially in a case like this with medicine, where there’s the experience and participation of humans in the loop who are part of the decision making. We’re not talking about complete automation, and there are these squishy, subjective areas that you get into. And what is that process? Do you feed data back into the model? Do you kind of go offline at that point, take the machine’s best guess plus the human’s, and go to some manual process? There are lots of different ways to think about that, but you have to think about how they’re going to provide their insight and their thinking into the final decision to see if we’re actually doing anything of value with any of this stuff. We have to think about that operationalization holistically to be successful.
James: Absolutely, yes. I mean, and medicine is a good one because it’s the physical interaction with the patient. We’re doing some in manufacturing and some other places. And then there’s still visual inspections. There’s still a sense, right.
We talked to an independent system operator, and one of the things they said was, people tell us which plants are going to be producing power next month, and we don’t always believe them. Okay, so I need a way to say, I know technically the data feed says, “These are the plants that are available for power generation next week,” but I don’t think they’ll be ready, because I’ve talked to the head engineer over there and I think they’ve got a more serious maintenance problem than they think they’ve got. So, I think it’s going to be at least another week.
So, I don’t want to build an optimization that assumes this plant will be ready on Monday because I don’t think it’s going to be ready on Monday. Well, you need to understand where that fits in the decision-making, otherwise you can’t really take advantage of it. You just have people who have this nagging sense that the automated system is going to be wrong. [laugh]. And that doesn’t help anybody. Then they have the problem you said, which is, it works; no one uses it.
Brian: Yeah, exactly.
James: It’s 95 percent accurate, and it’s used 1 percent of the time. “Okay. Well.”
Brian: Yeah [laugh].
James: Knock yourselves out.
Brian: No one’s applauding? [laugh].
James: Yeah, exactly, exactly. In one session, I came up with this sort of dictionary definition of a valuable analytic. I said, “A valuable analytic is one where the organization that paid for it can identify a business metric that has been-” note use of past tense-“improved because of the analytic.” That means you have to have deployed it, and it has to have had an impact on a metric that I track as a business metric, and I can say, here’s my metric before the analytic, here’s my metric after the analytic. That metric has improved, and my competitors have not seen the same improvement from environmental factors; therefore, the analytic is why my results are better.
Once you apply that, it’s like crickets. “Okay, I’m listening. I’m listening. Anyone got one?” And it’s really quiet. “Well, when we get it deployed, it will.” “Yeah, when it’s fully rolled out, it will.” “When we apply it to the other 99 percent of the portfolio, it’s definitely going to have a big impact.” “Yeah, okay. Well, call me when that happens.” [laugh].
Brian: Yeah, this is the outcomes over outputs mentality, you know?
James: Yes, yes.
Brian: It’s a big leap for a lot of-in my experience-very tactically talented people to make that mindset change: I’m really here to help the business achieve outcomes.
James: Outcomes. Exactly.
Brian: And it’s a big change.
James: It is a big change, and it’s very hard for people. They struggle with the fact that the right answer might be a very dull model, built with a very dull technique, using very small amounts of data, and that may be enough to move the needle.
Brian: That’s not what I got this PhD for. [laugh].
James: Exactly.
Brian: Do you know how much that cost?
James: Exactly. Yeah. And so it depends, and I’m with you: if you really want to do research, then there are organizations out there that want to hire researchers, so go work for one. Down at Microsoft, Google-these people have huge research departments working on techniques and everything else. Like Jim Goodnight used to say at SAS: “I need to hire PhDs because I need to figure out how to make this stuff work for everybody. But you guys, you need it to work.”
And so, I think that’s a big shift for people. And I think not just for the individuals because I think your point earlier is very valid. The way these groups are structured, the way they’re motivated, the way they’re paid, the way they’re led, all of those things have to change, too, because far too many companies just say, “You guys are smart. Figure it out. Tell me what I should do.” And it’s just like, well, that’s basically a waste of time.
Brian: Yeah. No, I understand. I think that’s partly why learning how to frame problems matters. Sometimes your job may be to go and help the room-the stakeholders, the people-extract a problem everyone can agree on and help them define it. And it may not feel like that’s what you went to school for, that’s not what you were trained to do, but that is what is going to allow you to do work that matters.
James: Yes. Exactly.
Brian: Work that people care about. And someone needs to do that. That’s a big part of what I train people on in my work, because otherwise you’re just taking a guess, and you’re probably going to end up not being happy. Most people like to work on stuff people care about. “Oh, we shipped this. It got used. It made a difference.” It’s so much more fulfilling, at least for me, to work on things people care about. So. [laugh].
James: I would agree with you completely. And I think we have to think about how we motivate, and train, and encourage folks. My observation is that the companies I work with that do the best job of this have much more of a mix of internal people that they’ve trained and external people that they’ve hired. They’ve got people who come in saying, I know this is important to my company-the way we interact with our partners, the way we interact with customers, never missing a customer order delivery date. These things matter here; I’ve been here a long time, and we talk about them all the time.
So, when they look at a problem, they’re looking at how to use analytics to make that better. Whereas if I come in as an analytics person, I don’t know your business, particularly; it is much easier for me to focus on the analytic as the outcome. That’s what my job is, right? So, I think companies need to stop trying to hire everybody fresh from outside and set up a separate group, and think much more about how to infuse the overall analytics effort with people who really have a feel for what matters to the company, because then they’ll be focused on it.
Brian: Yeah, absolutely. James, it’s been a great conversation, and I know you have a book. And so I assume that’s a great way to kind of get deeper into your head. So, tell me about the book. Who is it for? And where can my audience get it?
James: The newest book is called Digital Decisioning: How to Use Decision Management to Get Business Value from AI. And it really is based on, like, 20 years of experience-mine and others’-of trying to say, how do you effectively automate decision making so that you can take advantage of machine learning, AI, and these technologies? It really lays out a methodology. It’s only a couple hundred pages.
So, it’s more high-level, aimed at a line-of-business head, or someone who’s running an analytics group or an IT department who’s interested in this stuff. It lays out how you go about discovering what kinds of decisions you should focus on, how you build these kinds of automated decisioning systems, and how machine learning and AI fit in there. So, that’s the intent of the book. It doesn’t tell you how to build models, it doesn’t tell you how to write rules, it doesn’t really tell you how to build a decision model; it tells you when you should do those things and how they fit together.
So, it’s much more of an overview book to help get people started. So, that’s a place to start. It’s available on Amazon and Kindle. There’s a Chinese version and a Japanese version being worked on, but they’re not available quite yet. But they will be, and so-
Brian: Excellent.
James: -my publisher said, “You have to write it in less than 200 pages, James.” And it is 200… pages.
Brian: 199 pages. [laugh].
James: More or less, yeah. It’s more or less exactly 199 pages. There’s a problem with doing it-as you know, a problem with doing something a lot is that you can talk about it for hours, right? [laugh].
Brian: Yeah, yeah. Sure, sure. Awesome.
James: But anyway, so yeah, that’s the book.
Brian: So, are you active on social media? I know you have a blog. Is that the best place to kind of watch your work or a mailing list? What’s the-
James: Yeah, you can watch the blog. The company has a website with a blog and I have a blog at JT on EDM. I’m also on Twitter, @jamet123. No S. And that’s pretty good. I’m on LinkedIn, people can find me. It can be hard to find me because my name is James Taylor and there are lots of James Taylors, but I’ve been on LinkedIn so long that my profile is actually slash jamestaylor.
Brian: Oh, sweet. All right. [laugh]. Congratulations.
James: That’s the advantage of living in Silicon Valley. Is that [laugh] I got in there early. So, you know.
Brian: That’s great. Apparently, there’s a big mar-by the way, just to end this-there’s a whole market, I guess, for people who get the one-letter handles and then sell them.
James: Oh, yeah.
Brian: Do you know about them?
James: This kind of stuff goes on all the time. If you ever can buy a domain name that’s got real words in it, right-
Brian: Oh, right. Yeah.
James: And I get all these complaints sometimes that people are typing my email address, because it’s at decision management solutions dot com. And someone was complaining, and he worked for one of these companies where, in order to get the domain name, they basically misspelled the name, right?
Brian: Right. [laugh].
James: Yeah. And I’m like, “Really? It’s like you’re complaining about the length of my-at least they’re all real words.”
Brian: Right. Exactly.
James: I haven’t had to make something up, you know?
Brian: It’s management with an X. Oh. Okay. Excellent.
James: Yes. Yes, decision, but D-E-X-I-S-O-N.
Brian: [laugh]. Well, James, it was great to have you on Experiencing Data. Thanks for sharing all this great information about decision-making. It’s been fantastic.
James: You’re most welcome. It was fun to be here, and stay safe.
Brian: All right, you too.