012 - Dr. Andrey Sharapov (Data Scientist, Lidl) on explainable AI and demystifying predictions from machine learning models for better user experience

Experiencing Data with Brian T. O'Neill

May 07, 2019 | 00:42:33

Show Notes

Dr. Andrey Sharapov is a senior data scientist and machine learning engineer at Lidl. He is currently working on various projects related to machine learning and data product development, including analytical planning tools that help with business issues such as stocking and purchasing. Previously, he spent two years at Xaxis, and he led data science initiatives and developed tools for customer analytics at TeamViewer. Andrey and I met at a Predictive Analytics World conference where we were both speaking, and I found out he is very interested in “explainable AI,” an aspect of user experience that I think is worth talking about, so that's what today's episode focuses on.

In our chat, we covered:

Also, here’s a little post-episode thought from a design perspective:

I know there are countervailing opinions stating that explainability of models is “over-hyped.” One popular rationalization points to professions (e.g., medical practitioners) that make decisions all the time which cannot be fully explained, yet people trust those decisions without expecting a full explanation. The reality is that while not every model or end UX necessarily needs explainability, I think there are human factors that explainability can satisfy, such as building customer trust more rapidly, or helping convince customers/users why/how a new technology solution may be better than “the old way” of doing things. This is not a blanket recommendation to “always include explainability” in your service/app/UI; many factors come into play, and as with any design choice, I think you should let your customer/user feedback help you decide whether your service needs explainability to be valuable, useful, and engaging.

Resources and Links:

Andrey Sharapov on LinkedIn

Explainable AI-XAI Group (LinkedIn)

Quotes from Today’s Episode

“I hear frequently there can be a tendency in the data science community to want to do excellent data science work and not necessarily do excellent business work. I also hear how some data scientists may think, ‘explainable AI is not going to improve the model’ or ‘help me get published’ –  so maybe that’s responsible for why [explainable AI] is not as widely in use.” – Brian O’Neill

“When you go and talk to an operational person, who has in mind a certain number of basic rules, say three, five, or six rules [they use] when doing planning, and then when you come to him with a machine learning model, something that is let’s say, ‘black box,’ and then you tell him ‘okay, just trust my prediction,’ then in most of the cases, it just simply doesn’t work. They don’t trust it. But the moment when you come with an explanation for every single prediction your model does, you are increasing your chances of a mutual conversation between this responsible person and the model…” –  Andrey Sharapov

“We actually do a lot of traveling these days, going to Bulgaria, going to Poland, Hungary, every country, we try to talk to these people [our users] directly. [We] try to get the requirements directly from them and then show the results back to them…” –  Andrey Sharapov

“The sole purpose of the tool we built was to make their work more efficient, in a sense that they could not only produce better results in terms of accuracy, but they could also learn about the market themselves because we created a plot for elasticity curves. They could play with the price and see if they made the price too high, too low, and how much the order quantity would change.” –  Andrey Sharapov

Episode Transcript

Brian: I'm really excited to share my chat with Dr. Andrey Sharapov today from Lidl, the large grocery store chain from Europe. Andrey is a data scientist. I met him at Predictive Analytics World while we were both speaking there. He told me that he was quite interested in removing the black from the black box concept that goes with predictive models. This is called explainable AI, or XAI. I think this has a lot of relevance to designing good decision support tools and good analytics that people can believe in and will engage with.

There's obviously been a lot of talk about this area, whether from a compliance perspective, an ethics perspective, or just an end-user experience perspective. Being able to tell what models are doing and how they're deriving their predictions has value on multiple levels. Without getting super into the technical side of this, we're going to talk about how explainability within machine learning and predictive models has relevance to the design and user experience of your software application. Here's my chat with Andrey.

Hey, everyone. Welcome back to Experiencing Data. This is Brian and I'm happy to have Dr. Andrey Sharapov from Lidl. Did I say that right? Lidl, obviously, or maybe not obviously to our American listeners, people that don't live in Europe. But is it Lee-del or Lie-del, the grocery store chain in Europe?

Andrey: It’s Lee-del. Hi, everyone!

Brian: Welcome to the show, Andrey. You’re a senior data scientist at Lidl. Tell us about your background. What are you doing with grocery data?

Andrey: Hi, Brian. Thanks for having me on the podcast. As you said, Lidl is one of the largest retailers in Europe. We have more than 10,000 stores. We obviously have quite a lot of data that we're trying to put to work at the moment while building these data products. For instance, we try to create decision support tools in order to help our action planners or promotion planners make better decisions. On the other side, we're automating various processes, like order disposition, which means ordering goods automatically for all the stores. We have a lot of other use cases related to marketing and everything that has to do with the business at Lidl, more or less.

Brian: I'm excited to talk to you about a particular area of interest that you have, which is explainable AI. But before we get into that, I'm curious, it sounds like you've touched several different aspects of the business at Lidl with some of the data products that you're creating. What's hard about getting that right? Not so much from the technical standpoint, and the engineering, and the data science piece, but in terms of someone's doing the purchasing of the carrots and someone's doing the planning of the promotions, tell us about that experience and how you go about getting those people the information they need such that they're willing to use your analytics and your decision support tools to actually make decisions. Can you talk to us a little bit about that?

Andrey: Yeah, sure. As you've pointed out correctly, these days it's not really much about technology or data crunching but more about weaving together the relationship between data scientists and the business in order to get buy-in from the actual users. Let me maybe just say a little bit about how we worked on our first data product, the one that I created along with the team. We went through a lot of struggles trying to figure out what the data scientists and data engineers should be doing, and then at some point, we got product owners on the team, the people who actually talk to the business in the language that the business understands. We also got business people on board as advisors for the project.

The product that we built is called a planning tool, so to say. Every week, the operational people plan a promotion for a certain date in the future. They have to make a certain number of decisions, take into account the conditions of the market, the weather, the time of the season, and a lot of other things, then come up with the number to order. The sole purpose of the tool that we built was to make their work more efficient, in the sense that they could not only produce better results in terms of accuracy, but they could also learn about the market themselves, because we created a plot for elasticity curves and they could play with the price and see, if they make the price too high or too low, how much the order quantity would change. That's the main idea behind the product.

I guess most companies have the same problem of trying to onboard the business users. The main way of thinking is, “Okay, we have AI in it, so the users will just say, ‘Okay, let's just use it.’” But most likely, it's not that easy. We went through this experience of learning that, “Okay, although we have the coolest algorithms in our system and the coolest people working for us as data scientists and engineers, that totally doesn't mean that the final user will use the product.” The way that we tried to convince them was by building a fancy user interface, in terms of making it more beautiful, so to say, but nonetheless, they were not very convinced.

As far as I know, the operational people are really hard to convince because maybe the majority of operational people try to use such tools in order to execute certain tasks very fast. They don’t have a lot of time to try to learn what’s going on but rather, they would like to do a few clicks and the job is done and they move to something else.

In our case, since it was a lot of machine learning, a lot of predictions, there was this problem of trusting the system. Although they could use it in the way that I just described, doing some clicking and completing the task within, I don't know, 15 minutes maybe, they were hesitant to use it because there was this lack of trust in the system. They would question why the prediction is maybe a little bit higher than they expect or a little lower, and there was no way to explain it to them. You basically say, “Okay, it's an algorithm.” This aspect was, I guess, quite crucial, because in their mind they also have a certain number of rules that they follow when they do the planning. This is basically an invite for the next question, so to say, about explainable AI: we tried to show the end-users various types of explanations later in order to gain more trust from their side.

I guess, as I said at the beginning, there's this phrase that if we have AI in the data product, then they will use it: “Let's just build it.” Well, in the end we've learned that it's of course not the case. People were very skeptical at first, but later we really tried to work hand-in-hand with them, polishing the features that they had in their mind. In this way, we've brought in more people who've tried the product.

Brian: You hit the nail on the head there with these technologies. Whatever is new, and right now AI is definitely high on the hype cycle, is not magic sauce. You don't just stick it into the cake and then all of a sudden everything is solved. You still need to map these technologies to fit the tool into the way the customer wants to use it. In this case, the way the tool does your modeling, is it based on, for example, how a purchaser worked, all the factors that they were using, whether it was a calculator or some kind of manual process in Excel? I imagine that they have some kind of recipe that they would follow to do this prior to you doing any type of AI or machine learning to help with that decision support. Is that how you helped get adoption to be higher? Is it modeled on that, or were you looking at other data?

I’m curious especially around like if there’s experience or maybe—I don’t know if you’d call it biased—but there might be decision points that a human would be using in the traditional or the old way that they use to do those tasks that you can’t maybe perhaps integrate into the model. Does that make them not trust it as much even if maybe you’re actually factoring in more variables that they never use to like, “Oh, well, you never had weather when you forecasted crop and prices to figure out how much to purchase or whatever. You never had that. We actually provide that, but we don’t have last month’s purchase,” so you don’t know what the price was last month. That’s a bad example but are you following what I’m saying? Can you talk a little bit to that terms of adoption?

Andrey: Before we even started, people were able to plan different promotions for the last, I don’t know how many years, and the main tool that they used was Excel. People with a great number of years of experience, they put together all these datasets for themselves and they’ve developed a certain tactic for how to approach the planning. Don’t forget, Lidl is operating in about 30 countries and each country has its own secret sauce on how to do the planning.

As I said at the beginning, we had one person from one of the countries, one real planner, who showed us how she does it for real, what factors they consider, which logic, and from there we tried to mimic the whole thing using machine learning algorithms. Of course, machine learning can take into account a lot more factors than human planners. But nonetheless, the results that we got were of course slightly different, because with human planners, I mean, it's impossible to get the same number from a human and a machine learning algorithm. They will always kind of say, “Okay, why is it lower than I expect? I need to understand why.” The only answer we had at the time was, “Because the model said so.” That's certainly not enough for them.

Just recently, we've developed quite a good relationship with the Polish planners. They have similar concerns, so they tried the tool, they like the tool, but again, for them it's somewhat hard to start trusting the system immediately. They tried to plan some promotions and it was fine for them, but some of them got unexpected results. You hit the wall; they start asking you for all sorts of contrastive explanations: “What if it was different, then what would have happened?” Without explainable AI or any kind of explanation of the machine learning, you just cannot go forward with them. This is how I would put it.

Brian: I do want to get into the explainable AI piece. I'm curious though, you mentioned Lidl is in 30 countries. For example, the purchasing department here, are there really 30 different recipes that are valid, and/or are there cultural distinctions in the way stock is purchased for stores in Germany versus Poland, or is it that these heuristic models were kind of organically born in different ways in each place? Are you guys trying to centralize that, or are you creating unique models? Are you trying to map it on like, “In order to get the Polish buyer to trust us, we need to show that our model is based on the way they were doing it, even if it's a different model than we use in Germany or Italy.” Can you talk a little bit about how you make those decisions and how you keep that trust so that people are still going to use these decision support tools?

Andrey: As I said at the beginning, each country has its own small features when they do the planning of promotions. Lidl is currently developing various tools in order to standardize this process, but it is an ongoing thing. Like with any person, be it the promotion planner, or a bank teller, or whatever, people tend to oppose changes; they just don't buy it in many cases. It takes quite a lot of time to convince them: “Okay, this is the right way to do things. We're suggesting a new way to you that is not really new, it's just kind of more generic,” so to say. Nonetheless, they would say, “Okay, we have this one data point that must be everywhere, otherwise we just don't take it.”

It’s really a matter of time, matter of interaction with the client trying to convince them that if it’s so important then we can build it into the tool, then they will go along, and they will buy it. Or on the other hand, try to convince them that that is not important then maybe they, at some point, they will get convinced. These are all the different types of things that you have to discuss with the customer one-on-one. We actually do a lot of traveling these days, going to Bulgaria, going to Poland, Hungary. Every country, we try to talk to these people directly. Try to get the requirements directly from them and then show again the results back to them and say, “Okay, we did it for you specifically so let’s work together.”

Brian: I think it's great that you're going out and doing that one-on-one research with your customers, because that's another way to build support: when people feel they're being included in the process and you're not imposing a tool but actually modeling the tool on them, that's another way to increase engagement. I'm curious, do you find that in the individual countries, the managers, whoever ultimately makes the decision on what tools or what model is going to be used to make the final decisions, are they interested in, “Oh, look. Italy factors in the weather. They factor in this thing that we never thought to do. We never really gave it that much weight. Maybe we should do it that way?”

Is it 30 independent recipes that you guys have been generalizing based on the variables that you find have the most impact on the quality of the predictions? Is that shared, and the countries are aware of what each other are doing, or is it more like, “Yeah, yeah. That's nice, but Poland is different. We want to do it this way and we know it's right.”

Andrey: It depends. Sometimes there are certain legal things that we have to take into account that are not quite transferable across different countries, so those things we just cannot take into account, because then the tool becomes too specific for each country. But on the other side, the things that we are able to generalize, we do simply by trying to get more data from the countries and blending it together, or something like that.

Brian: Talk to me about explainable AI, this technology, for people that don't know what it is. Effectively, what we're talking about is, when you're doing things like showing a prediction from a model, it's actually showing what some of the criteria were, and perhaps how they were weighted and how they had an impact on the conclusion that was derived by the system. Is that a fair summary of what explainable AI is? Why don't you tell us, instead of me trying to do it, since this is a space that you're interested in.

Andrey: Absolutely. The research area of explainable AI is all about trying to understand the reasoning of a black box model. This is not a new idea. It was quite popular back in the ‘90s but then was somehow forgotten, and it resurfaced back in 2016, 2017, when DARPA announced a lot of funding for this area of explainable AI.

What it actually does is, for instance, let's say you have a data scientist working on a sophisticated model, whether that be a neural network or anything else, and it produces a prediction which is just a single number, or a binary decision, yes or no. In many cases, these black box models are really hard to explain to a non-expert. Even data scientists, in many cases, don't know why it predicted yes versus no. There is no clear, human-readable explanation that can be delivered in this case.

So, the whole area of research of explainable AI is trying to, first of all, come up with the whole philosophy of what an explanation really is, and this is not a done deal, I would say. People are still trying to understand what it really means. The second part is, “Okay, how do we generate something that a human being can understand?” Whether it be, I don't know, some factors: “Okay, for this prediction, Factor A played the biggest role and then Factor B played a somewhat lesser role,” and so on. Or even a sequence of if-then rules, such as, “If the air temperature is higher than 30 degrees and it's the middle of the day, then the prediction for the sales of ice cream would be high.”

What I'm trying to say here is that we use this technique in order to interact with our customers. For instance, when you go and talk to an operational person, a person who works in operations, who has in mind a certain number of basic rules, three, five, six rules, when doing planning, and then you come to him with a machine learning model, something that is, let's say, a black box, and you tell him, “Okay, just trust my prediction,” in most of the cases it just simply doesn't work. They just don't trust it. But the moment you come with an explanation for every single prediction your model makes, you are increasing your chances of a mutual conversation between this responsible person and the model in this case.

For instance, if the model predicts sales of one, two, three, four, five for May-June, and then he asks, “Why is it one, two, three, four, five?” And then you say, “Okay, it's because regular sales in June are greater than two, three, four, it's May or June, and something else, and something else.” Then this person can relate these statements to something that he has in mind when doing the planning himself. This is where the eureka happens, so to say, because they see that the model is reasoning in a similar way as they do. This way, the level of trust certainly goes up, and then they're willing to try it even more.

I’m aware of similar stories. For instance, Yandex has been in the business of building similar tools for their customers and they also have explanation modules. It’s not a kind of a one-shot thing that we do at Lidl but it’s gaining quite a lot of momentum, I guess.

Brian: I think it's natural that trust and engagement are likely to go up if you have this in place, because as you said, people can see that the tool is modeled on the work and the tasks that they want to do, and it's not imposing a magic answer. Otherwise it's kind of like saying, “Hey, none of your experience from the last 10 years of running promotions at Lidl matters anymore. Here's what you should do. Here's the product. Here's the sale price and how long you should run it for.” I think it's just human nature, there's a natural tendency to not want to trust that, “Well, my whole job and these activities I do are completely replaceable by a magic box.” But when people start to see how it's actually decision support, I think it's natural that the trust goes up.

Having said that, I'm curious, would you say this is a regular ingredient in the data products that you bake up at Lidl, the tools that you guys are working on, or is this an occasional thing? Why or why not would it be included on everything, if it's possible?

Andrey: Well, there are different cases. Certainly, explainable AI is not something that you should or must use in every situation, but I'm a great believer in decision support tools and human-in-the-loop applications, not necessarily in retail but in general. Every time people have to look at certain predictions, we try to come up with explanations, or at least some sort of strategy for how we can come up with these explanations. On the other side, these techniques are very useful when you do debugging of machine learning models. Even if you are not planning to show these explanations to anybody in the business, you still benefit quite a lot when you're actually developing a machine learning model by using these tools. Just to fill in a few words: you can avoid all sorts of overfitting in the model, or remove features that actually make the model unstable, and so on.

I think the main point here is that it would be really sad to build models where we don't really understand a lot of what is going on inside when they make predictions. That's the main idea.

Brian: I hadn't thought of it that way, but I could see how, even as a debugging aid, it could be useful as you're trying to improve the quality of the decisions, the advice that the tool is generating. I don't know if you have data, or even if it's just qualitative in nature, but having included this on any of the products at Lidl, do you find that once people have seen that the model has some explainability behind the predictions that are being made, do they tend to still pay attention to all of that going forward, or is it more like, “Oh, I can see that Andrey and his team factored in last month's purchase data plus the competitor data, and that's what I always do. Now that I know that he always does that, I don't really need to see that every time. I'm not going to second-guess as much as I used to. I'm going to trust that going forward.” Or do you find that the explainable AI portion of the UI is actually an integral part of using the tool every time? Are you following what I'm saying? Do people start to ignore that over time, or do customers see that as an ongoing, useful aspect of the interface?

Andrey: We don't have the explainability built into the user interface at the moment. We have it more as a PoC; we show it on demand, more or less, but we don't have it as a default feature. Certainly, it should be working in the way that you described. I actually read a few papers recently about the effect of explainability. People were tested, I don't remember exactly the test set-up, but the point the researchers were making was that the accuracy of predictions within this realm of human-in-the-loop applications does not go up. Whenever people are using a machine learning model that makes a yes versus no prediction, the performance of this blend of human and machine does not go significantly higher, at least not in a way they could prove. But the trust in the system can be something like 20% higher than without any explanation.

I guess what you said is exactly the point of building explainable AI into any tool: to make it transparent, and then at some point, once people trust it, they don't really have to check these explanations every single time because they know, “Okay, we are on the same page, machine and me.” Unless they're trying to explore some unusual situation where they really want to test the system or learn something from it, because, well, this is also a possibility for a human-in-the-loop application, where humans actually learn something new from the system. I think this is another case where they would use it occasionally after that.

Brian: Without getting too technical here, since Experiencing Data, this podcast, is more about customer experience with data products and that type of thing, I'm curious because there may be listeners wondering, “Hey, I have this decision support tool or this analytics deployment that's starting to use some machine learning, and it doesn't have any type of explainability. We are seeing the same symptoms you talked about, like low engagement and people not trusting it, and we put a significant data science investment in place.” Is it easy to retrofit a technology investment you may have made to include some of this, such that you might be able to start to improve the trust factor, or is this something that really needs to be implemented from the start and is much more difficult to put in after the fact?

Andrey: It all depends on the system itself: how many models you have there, what the complexity of the whole thing is. Technically speaking, there are already a number of libraries available for doing these things. Everything is open sourced. You can just Google for words like LIME, or SHAP, or Anchors, or contact me on LinkedIn, for instance; I could point you to various sources. But the point of explainable AI is that these tools are sort of model-agnostic, in the sense that they just need the model and the predictions that the model produces, and that's pretty much all. Then you can write a few lines of Python and it's there. You can get your explanations. The short answer is you don't have to invest too much into getting explanations if you already have a working system.
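To give a sense of what “a few lines of Python” can look like, here is a minimal sketch using the open-source SHAP library on a generic scikit-learn model. The dataset and model are illustrative placeholders, not anything from Lidl's systems, and exact behavior may vary between library versions.

```python
# Minimal sketch: per-prediction explanations for a tabular model with SHAP.
# The dataset and model below are illustrative placeholders only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# A small public regression dataset standing in for real planning data
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train any model that produces predictions; SHAP only needs the model and the data
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Compute feature attributions for each prediction
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)

# Show which features pushed a single prediction up or down
shap.plots.waterfall(shap_values[0])
```

In a decision support UI, a plot like this, or just the ranked list of feature contributions behind it, is the kind of per-prediction explanation Andrey describes showing to planners.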

Brian: Wow. That's a little bit mind-boggling. It makes me wonder, why isn't this being used more? Is there a perception that it's costly, or that it's more difficult than it is, or is the quality not there, or do you think the business and the leaders and the people doing this don't think it's necessary? It fascinates me: if it's that simple and the quality is there, if this is such an easy way to build trust, why is this not happening more? Did something happen between the ‘90s and now that made this fall out of trend? Can you talk to that a little bit?

Andrey: In my personal opinion, I think over the recent 10 years, data scientists put too much attention into getting high model performance in terms of accuracy or lower error. There's this whole trend of Kaggle competitions where you try to build a super accurate model, and the opponent who gets the first prize is probably a hundredth of a percent more precise than you, but the main question is, “Does it actually make sense? Is it what the business really wants?” In my opinion, it's not the case.

Yeah, for a scientific breakthrough, maybe it is useful. But certainly, in order to gain trust, you cannot just build more and more sophisticated, high-precision models. It just leads you nowhere. The other thing is that over the last four or five years, the deep learning hype took place, and a lot of attention was in the deep learning area, where people were all in doing neural networks and nobody really cared about explainability. It was more of, “Okay, let's just predict this to 99.9% accuracy.” At some point, some executives realized, “Okay, we have a lot of these models and we have no idea what is really going on inside.”

As I mentioned, the DARPA program, and also the GDPR regulation that came into effect in May 2018 and put the spotlight on the right to explanation, all of these factors together propelled the explainable AI topic forward, and it's now gaining a lot more attention than before.

Brian: In terms of neural nets and some of those technologies, is this available to products that are leveraging neural networks and some of these more complicated artificial intelligence technologies? Is it widely available to add the explainability portion?

Andrey: Well, it depends on what kind of data you're working with. If you're working with regular tabular data, data that is in tables, no text, no images, then it doesn't really matter. You can take a neural net or any other model. But once you go into the more sophisticated realm of neural nets working with text data, then it is slightly more complicated to get it to work, but it's still possible. You can still use LIME or SHAP. It's very interesting what they do, actually. For instance, if you try to, say, classify legal documents or medical documents, or do fake news classification, yes versus no, then these explainability tools can highlight the actual words in the sentence that play the biggest role. If it's fake news, it will underline certain words so that a human being can look at them and say, “Okay, this content is maybe more full of feelings or calling for more action,” or something like that, versus a prediction of not fake, where it's mostly facts, facts, facts. Basically, the explainability tools highlight words in a sentence.
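As a concrete illustration of that word-highlighting idea, here is a small sketch using LIME's text explainer on a generic scikit-learn text pipeline. The 20-newsgroups data and the simple classifier are stand-ins for the kind of document or fake-news classifier Andrey mentions, not anything he describes using.

```python
# Illustrative sketch: word-level explanations for a text classifier with LIME.
# The dataset and pipeline are generic stand-ins, not a real fake-news model.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)

# Any pipeline exposing predict_proba will do; LIME treats it as a black box
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(train.data, train.target)

explainer = LimeTextExplainer(class_names=categories)
doc = train.data[0]

# Perturb the document, fit a local surrogate, and rank the most influential words
explanation = explainer.explain_instance(doc, pipeline.predict_proba, num_features=6)
print(explanation.as_list())  # [(word, weight), ...] showing the words that drove the prediction
```

The returned word weights map directly to the kind of highlighted words in a sentence that he describes.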

In terms of images, it's even more complicated. There are a lot of things that are model-agnostic, but even more things that are not. In this case, you really have to be an expert in whatever neural net you're using and try to get it to work: get code from GitHub and try to reproduce the results of a research paper. For these more complicated cases, it's not that easy, but it's possible.

Brian: You brought up something, too. I've heard this trend repeated: that there can be a tendency in the data science community to want to do excellent data science work and not necessarily do excellent business work, building tools and solutions that help the business. I could see how some data scientists may feel that adding explainability is not going to improve the model: “I can't write a paper on that and build my credentials with that type of information,” so maybe that's responsible for why it's not as widely in use. Do you think that's going to change? Do you think this will become more of an expectation going forward, that we won't be talking about black boxes as much a year or two from now? Do you think that will start to go away and expectations will change? Any thoughts on that?

Andrey: The whole trend is going in the direction of explainable AI anyway, for one simple reason. In the last years, AI was mostly used in the labs, and probably to automate certain processes where no humans are really involved; it's more like robotics or something like that. But these days, AI is going into various fields like healthcare, or legal domains, where you deal with things that affect humans directly.

For instance, how would you explain why a certain person didn't get a loan at the bank while another one who looks very similar got it, right? There are a lot of questions coming up these days because AI is touching upon points where humans are personally involved. Because yeah, we don't really care how some robots are moving goods at an Alibaba warehouse. I mean, doing explainability for that, yeah, maybe it's a really sophisticated model, but I don't care. I order my goods and I get them, and that's it. But whenever things go in the direction of some social interaction, or things that affect people directly, or these high-stakes decisions, then interpretability and explainability are a must.

I think many people would probably choose a model that is maybe not as exact and accurate but is explainable, versus something that is extremely accurate but, okay, sometimes it can kill you. That's kind of my logic here.

Brian: That's kind of the dividing line between the business and human side of it and the pure data science side: you might have a super accurate model, but if you find out that they're still buying carrots the old way, does it really matter that you have an excellent prediction on how many carrots to buy, or at what price, if they never take advantage of it? All of that investment is kind of thrown out the door. From a business standpoint, an acceptable model quality with a highly trusted interface and user experience might be the better business decision, even if it's not the best model quality from a data science perspective. I think that's important stuff to consider in all of this as we build these tools.

This is awesome. It's actually exciting from a design perspective to hear that this is available as a tool that we can implement. Obviously, context matters, and you have to look at particular domains and particular types of data and all of that, but it sounds like something that, as part of our toolbox, we should be leveraging regularly when possible, especially if we're talking about human-in-the-loop tools and decision support tools as opposed to, as you said, the Alibaba robots…

Andrey: Yeah. That's [inaudible].

Brian: …get my book or my shoes or whatever I ordered. Just on this topic as we wrap up, any broad-level advice for people that are looking to jump into this and make effective data products? It doesn't have to be just explainable AI, but from your experience at Lidl. Maybe a mistake that you've realized and changed, and how you're approaching your projects and building these tools. Any advice for people?

Andrey: I guess the only advice I would give to anyone who wants to build a product that actually gets used is to go to the people and ask for the actual requirements; try to involve the end-user at the earliest stage possible. I think this is the only way to succeed in the end. This is how startups fail or succeed. You have to really understand what you're doing. At Lidl, we've kind of made the journey from zero to hero over the last two years, and we learned it the hard way. The more you interact, the better. I mean, to the people it doesn't really matter, because in the end a data product is a piece of software that has machine learning or algorithms inside of it. Nobody really cares about how sophisticated these algorithms are. People just want to make sure that they can get the job done efficiently, have a nice experience, no bugs, and stuff like that.

This is all the same story as 10 years ago, probably, when we didn't build data products but just regular software. But again, the main advice is: don't try to build this ‘moon-shotty’ product within a few months; try to iterate and onboard your users as early as possible. This is the main advice that I could give. Of course, use explainable AI in order to convince and gain trust. This is a must in my view, except of course in cases where something is running and no one sees it. If it's an interactive tool, interacting with users, they have to be sure that it is doing the right thing.

Brian: Great advice. People that have been on my designing products mailing list and the Experiencing Data podcast have definitely heard this advice beaten into the ground many times: get out there and talk to people early in the process to inform what you're doing, and don't work in isolation, because that's almost a sure way to produce something that people aren't going to use, since it will be full of your own bias about how things should be done and not informed by what the customer wants to do. Good words, good parting advice. Where can people find out more about you? Are you on Twitter? Do you have a website, LinkedIn, anything like that?

Andrey: Well, I'm posting quite a lot on LinkedIn. I have a group dedicated to explainable AI; it's called Explainable AI-XAI. Everyone who's interested in learning more, feel free to join or contact me through LinkedIn. I don't tweet much on Twitter. My presence is mostly on LinkedIn at the moment.

Brian: Great. I'll definitely put links to your profile and to the Explainable AI group on LinkedIn in the show notes. This has been really great, Andrey. It's been great to talk to you. Thanks for coming on Experiencing Data.

Andrey: Thank you, Brian. Thanks for inviting me. 
