049 - CxO & Digital Transformation Focus: (10) Reasons Users Can’t or Won’t Use Your Team’s ML/AI-Driven Software and Analytics Applications

Experiencing Data with Brian T. O'Neill
October 06, 2020 | 00:40:50

Show Notes

Watch the Free Webinar Related to this Episode

I went into depth on how to address the challenges from this episode in a webinar on Oct 9, 2020. It runs about 30 minutes, plus Q&A time.

Watch Now

Welcome back for another solo episode of Experiencing Data. Today, I am primarily addressing the non-digital natives out there who are trying to use AI/ML in innovative ways, whether through custom software applications and data products, or as a means to add new forms of predictive intelligence to existing digital experiences.

Many non-digital native companies today tend to approach software as a technical “thing” that needs to get built, and neglect to consider the humans who will actually use it — resulting in little business or organizational value emerging. While my focus will be on the design and user experience problems that tend to impede adoption and the realization of business value, I will also talk about some organizational blockers, related to how intelligent software is created, that can derail otherwise promising digital transformation efforts.

These aren’t the only 10 non-technical reasons an intelligent application or decision support solution might fail, but they are 10 that you can and should be addressing—now—if the success of your technology is dependent on the humans in the loop actually adopting your software, and changing their current behavior.


Transcript

Brian: Welcome back to Experiencing Data. This is Brian T. O’Neill, and I’m going to be rolling another solo episode today, this time focused on CxOs. I’m calling this episode CxO Focus: Ten Reasons Customers Don’t Use or Value Your Team’s New Machine Learning or AI-Driven Software Application. I could actually say software applications, plural, since I know a lot of you are working on multiple projects, products, and models at the same time.

Today’s episode is really for people who consider themselves non-digital natives, or who work at non-digital-native companies and are trying to use AI and machine learning in innovative ways inside their business, primarily through custom software applications, or by embedding new forms of predictive intelligence into digital experiences that will be used by employees, customers, partners, suppliers, et cetera. The truth is that many non-digital-native companies approach software as a technical thing that needs to get built, without as much focus on the humans who will actually use it and what they need in order for any business or organizational value to emerge. This gets even more complicated when the software is intended to be intelligent, and you’re integrating data science and analytics into the user experience, whether primarily in the background or more in the foreground of that experience.

And now let’s get into the ten reasons—top ten reasons, at least—customers don’t use or value your team’s new ML or AI-driven software applications. Number one: usability. What does this mean? Well, I have a couple different things that I put under the usability category. These can range from “it’s too hard to use” to “it requires explanation to new employees, customers, and users.” What I mean by that is, not only do you not want to have to explain the tooling and the applications to the current employees, users, and customers you have, you also don’t want a situation where, if they leave the company, you have to retrain a whole bunch of new people. So, you have to think down the road as well—not just about the current people you have, but about the future. Usability also suffers when the software requires major behavioral changes to people and processes that were never considered critical to the success of the actual technology piece.

Another aspect is too much information or not enough information. So, this gets to the right amount of information density. I feel like the pendulum swung from way too much—which is really a problem of poor design, not so much information overload—to information deserts: these clean, heavily whitespaced dashboards with a few donut charts and a bunch of really big numbers, without a lot of comparisons, stuff like that. So, density is important here.

Explicit versus implicit conclusions in the data. This is another usability aspect. It gets to how much you’re putting the onus on the customer or user to make a decision themselves, implicitly, from the information, versus giving them prescriptions or the different choices the data actually suggests they should proceed with.

And the final thing on usability: you aren’t measuring usability ever, let alone routinely, and routing that information back into your product development cycle. By product here, I mean whatever the software application or thing is that you’re building. And we’re going to talk more about the concept of product—even if you’re not actually selling any commercial software or a SaaS—when we get into product mindset later.

Number two is utility, which is different than usability. Utility for me is, “I understand how to use the software that’s in front of me, but I don’t care. I just don’t see any value in it.” And again, I’m speaking in the voice of your customer right now. They can understand what the buttons do and what’s in front of them, but it’s not valuable to them; it doesn’t answer a question they have. You can also think of a utility failure as a solution in search of a problem. This often happens when there hasn’t been sufficient one-on-one user research to fully understand the problem space, the context of use, and the environment in which the software is going to be used by the humans in the loop. So, that’s utility.

The third one here is trust. We hear about this a lot in the context of things like ethics. I want to talk about trust in a few different ways. “I can’t understand if it makes predictions the way I used to”—this is, again, me in the voice of your customer. This gets to situations where perhaps you’re swapping out a traditional way of decision-making with AI, and the customer using it can’t tell whether the system is performing the checks—the different decision points they used to rely on to make an overall larger decision. If they can’t tell that, that’s a trust issue.

Another one is data privacy issues, which many of you know about. This can get to the heart of, “I don’t know how my information is going to potentially be used against me.” It’s probably more important in applications and systems where there’s both input and output—where the user is actually putting data back into the system that may be personal information of some sort. Being transparent about how that data is used is a factor. And it’s not just technically being transparent by putting footer copy in tiny gray text that was approved by the risk department or whatever.

People can sense whether you’re being genuine with this or not, and if you try to mask it behind terms of service or something like that, it’s pretty clear you’re saying, “There’s stuff in here I don’t really think you’ll want to read, or we don’t really want you to read, and we’re going to make it really difficult to understand because we’re doing some stuff that’s kind of questionable.” That’s what tiny gray text chock full of legal language can signal. So, give your approach some thought. Again, this is probably less critical for employee situations where you’re using internal tooling, but it is something to think about.

Another aspect of trust is: what are the ramifications for me, the user, of making the wrong choice? If the suggestion I get from the tool is wrong—this gets to your true positives, false positives, true negatives, and false negatives—what happens to me? Sometimes this is an issue, sometimes it’s not; it really depends on the context. But the design of that situation should be thought out so the customer can understand the context of what’s happening and what might happen to them, or downstream from them, as a result.

Another aspect of trust is ugliness: the aesthetics. I know this one probably sounds a little strange. But the short of it is that the way things look—the presentation, the aesthetics, the look and feel; there are lots of different words for this aspect of design, which is what a lot of people tend to think of as design because it’s the most visual, most surface-y part—does matter for this kind of stuff. If your thing looks ugly and crude, like an engineering prototype, it suggests that the system—the code, the stuff behind it, the work that went into it—is also ugly and crude and potentially not stable.

This is a very hard line, I think, for a non-designer to draw: when is it sufficient? You may or may not hear people comment on this out loud. If you do hear people mentioning it, then it’s probably obvious there’s a problem. So, it’s a sliding scale. It’s kind of—no pun intended—a grayscale thing: how much treatment and how much visual love does it need to get?

That’s something that probably only a designer is going to be able to help you out with, but I can tell you that it matters in terms of building trust. And people are more willing to struggle with software and applications that look well-polished. There’s a study about this, so there are numbers and facts behind it if you really need that, but in short: looks matter. So, it’s something you need to be thinking about.

And finally, expectations were never set, or are ambiguous, about what the intelligence—the AI, since we all have to use the shorthand here—can or cannot do. A good example of this is something like a chatbot, where users don’t really understand the types of questions and the scope of what the system can actually answer. If you just give people an open slate and they expect it can do anything, there’s a good chance that after a few swings at the plate, they’ll find it falls short of their expectations, and the next thing you know, all they’re trying to do is figure out how to bypass the system. That can create some real frustration, where people are just really annoyed with the technology.

So, while you may think you did a great job with text and natural language processing, and you have all these models in play, and it’s in production, and all this kind of stuff, the trust may not be there, because the expectations about the intelligence are mismatched with the reality of what the system is able to do. So, that’s another trust aspect.

Number four: lack of empathy for the customer. I heard a quote from a customer who was talking to me about trying to bring a product mindset into his analytics organization. He was saying, kind of jokingly, that he has some staff members who have stepped up from technical roles into acting product management roles, and there’s still this attitude of, “Our job would be so much easier if we didn’t have customers.” That kind of mindset is not good. If these people are the frontline decision-makers about what your customers and users are going to interact with, that’s troubling.

Empathy matters. And I know it can sound like this kind of squishy, emotional thing. It really matters if you want to do a good job with this stuff. If you really want to drive adoption, you have to care about this. And not everyone is probably wired to want to go out and do this kind of work. You can train people in this space, but the first part is that you have to decide that it matters.

An example of lack of empathy, to me, is that your team has never watched the target user you’re supporting—perhaps another employee—do their job, their work, the activities they do every day, to the point that they can really empathize with that customer and put themselves in that person’s shoes when they’re back at their desk writing code, or whatever they’re doing. They should have a deep sense of what it’s like to be that person—whether that person is in payroll, or accounting, or some other department—having done enough shadowing that they can take themselves out of the equation when thinking about design choices, put the other person’s hat on, and make choices based on what’s right for that other person. This is the act of avoiding self-referential design, and it’s a core tenet of being a design thinker, or just being a designer. Even if you’re never going to become a professional designer, this act of empathy is really important.

Another idea under this topic of lack of empathy: there’s little or no routine access to customers to validate your choices. If you don’t have the relationships set up and you don’t have buy-in from executives—for example, if you’re going to be helping procurement with some type of AI or machine learning application, and procurement doesn’t know about the work you’re doing, or doesn’t care, or their leadership isn’t bought in—you’re going to have a hard time getting access to those people. They’ll probably perceive it as a tax on their time unless they’ve requested the service and they see the value in participating. So, it’s really important to be opening up these dialogues and channels, establishing routine access, and having what we sometimes call design partners on board. These are people your team can routinely tap for quick phone calls, email reviews, stuff like that, to help validate the choices they’re making along the way.

And the final one here is the lack of prototyping or a fail-fast approach. I realize that with the timelines involved, especially when you’re building models, Agile and some of these approaches don’t work particularly well if you have to build out a lot of plumbing just to get to the point where you can start doing predictive modeling. But I do think there are ways to do prototyping here that I don’t hear talked about much. Prototyping means creating something very low-fidelity—probably not even any code—that can tease out the likely points of failure of this new application, which we can test with customers today, so we can plan for them now rather than waiting until we’ve built out what we think is a great solution, only to find out it has major blockers we didn’t consider.

So, that’s all under the theme of lack of empathy. Number five is elastic success criteria. This gets to, “The project is taking so long that nobody really knows what success for the business or the customer looks like, or how to measure it.” With these really long projects, people sometimes lose sight of the big picture, what success is going to be defined by, and how to measure it.

This isn’t a great way to work, because what happens is we all focus on the nearest hill and getting over it, and we focus on things that are easy to measure: code check-ins, meeting sprint deadlines, whatever it may be. If you don’t routinely have a way to validate yourself against progress metrics or actual success metrics, you have nothing to be accountable to, and the team needs to understand that ultimately, that’s what it’s about. It’s not about creating the model or the thing—that may be their individual responsibility—but someone needs to own the idea that the output of the technology effort is not the success; the success is the outcome that’s enabled by it. We’re going to talk a little more about that in a minute. And actually, right here, I’m going to take a quick break, and we’ll be right back with the final five.

All right, we’re back with number six: leadership and skills gaps. This is where I hear teams approach new software applications as data and technology projects, or worse, as a ticket in Jira or some ticketing system. It’s looked at as a one-time thing: you throw it back over the wall, and then you’re off to the next ticket. The alternative is to think about these machine learning and AI initiatives—especially the ones that involve routine use with humans in the loop—as human-centered data products, not projects, led by a skilled product manager.

So, who should be the product manager? I have no idea who that should be at your organization. It may be someone you already have on staff. I do know there’s a lot of talk that data scientists are being asked to do too many things, wear too many hats, and be these huge generalists; they may not be the right person to be the product manager. But the idea here is, if you think about a successful software initiative as being tied specifically to business and customer outcomes, you need a lot of different stuff. You need to be aware of the design and user experience piece, aware of the engineering piece, understand the data, analytics, and data science pieces, and manage the team’s overall backlog and the focus of what they’re going to be producing.

Whose decision is it to say, “Yes, we’ll get a model improvement of eight percent if we include these additional features from this pipeline—which doesn’t exist yet”? Whose decision is it to say when you should spend the extra two months to build out that eight percent model improvement, with all the labor that comes with it, versus fixing some other aspect that you know is broken right now? You need someone whose job it is to focus on that value and who is looking at all the facets here. This is what product managers do. And they’re also managing up to the business and the leadership.

It’s a role that I think is missing a lot. It’s a foundational role at tech companies—you simply don’t go without product managers there; it’s table stakes these days—and I think the role of the data product manager is one that a lot of groups are missing. There’s a whole presentation I recently gave on this: if you come over to my website and go to the speaking page, you can see a talk I gave with the International Institute for Analytics about the missing role of the data product manager as filling this gap in the data science and intelligent software space. So, check that out.

The other aspect here, of course—and I kind of mentioned this—is designers, and the role of product or user experience design within the context of intelligent software. When and why do you need this? Well, again, if human adoption, usability, utility, and experience matter, and you actually want to make sure that the technology you created—the models, the predictive power, all these kinds of things—gets used and put to work in the business by the people it was intended for, this is what user experience designers are really good at. By extension, that can include user interface work as well. You may or may not need data visualization specialization; it depends on the type of work you’re doing. But data vis is really a subset of user experience and product design, in my opinion. Data vis isn’t going to fix all the adoption issues here. UX people put research into figuring out the emotional reasons people will and won’t use products, and the bridges we need to connect different pieces of software or offline experiences together. They’re thinking about the experience as a whole, as an enabler of whatever business outcomes you’re looking for.

And typically at a tech company, product management and product design are joined at the hip, often with the third leg being an engineering or technical lead. This is what I call the ‘power trio.’ I think it’s really important to have all three of those aligned. They all give and take; they push and pull against each other. The designer is constantly advocating for what’s right for the customer; the product manager is trying to make sure there’s overall business value; and the data scientist, software architect, or whoever the lead technical person is, is thinking about what’s doable. What can we get done in the time we have? Is this the right way to do it? All those implementation factors.

So, I do think there’s a large shortage of this, and we don’t see a lot of people doing the research that I think would really help get more of these AI products and machine learning applications into production, or quote, “operationalized.” I don’t like that word, but I think this would be easier if we were involving more product design and user experience professionals in that loop. So, those are the leadership and skills gaps: product managers and designers in particular. I’m assuming that you, listening to this, already know which technology people you need—your data scientists, analytics subject matter experts, analysts, data engineers, all the technical side—but sometimes I think people don’t really understand what these other roles are for. What pains and problems do they address within your organization?

I’m seeing this change; I’m seeing more and more talk about the product mindset. I talked about it recently with Karim Lakhani—who co-wrote the book Competing in the Age of AI—so you can go back a couple episodes and listen to that; we talked about these two things very specifically there.

Number seven: there is no intentionally designed onboarding or honeymoon user experience. What do I mean by that? Well, I think we all probably know what onboarding is. If you’ve ever bought an iPhone or an Android or whatever your device is, right out of the box, when you turn it on, you don’t just see a phone dial pad. There’s usually some type of setup process that walks you through what the experience is going to be and explains, “What is this? Why might you want to turn on this feature?” And it asks you to do it, et cetera. It’s a guided experience.

Not all software applications need all that setup, but the point is: did you actually give the requisite amount of thought to figuring out, “Do we need some type of intentionally designed onboarding experience here? What is the risk if we just drop people into this application with no hand-holding?” And again, this is a time to think about those future employees. When they come in, what’s that experience going to be like the first time they log into the application and use this thing? You don’t want to find out that all the great data science and technical work that went into a digital product never gets used because, you know, they’re supposed to use a special credential, and it doesn’t use the corporate sign-on, and they’re lost in a loop, and they don’t get the email for their account, and da, da, da, on and on.

There are a million things you probably wouldn’t think are really part of the overall data product work that can end up blocking all the value you’ve created. So, you’ve got to think about the onboarding piece. And you also need to think about the next phase, after the setup or onboarding, which I call the honeymoon user experience. This is about thinking through the early life cycle of a customer using this thing—the first couple of weeks, which could be days, weeks, or months.

This might be really important if, say, you’re transitioning people from, quote, “the old way to do it” to “the new way to do it.” If this person used tool X in their past job, and now they’re using this intelligent tool Y, it may take time for them to transition over. Another reason might be that the model needs to be trained on data they’re going to provide, and it might take a while before it’s accurate or has enough information. So you need to think, “Well, if they don’t get any value out of this until they’ve walked around with their phone for 14 days, what is that experience going to be like over those 14 days?”

From an engineering or technical perspective, you could say it’s technically accurate: in 15 days, this application will work. But they may never get to 15 days, because they might decide never to log back into the tool—they don’t even remember it exists, because there was never any notification telling them, “You know what, we’ve now collected this data. We can now predict the following things. Please log in now.” No one ever thought through any of that, because it wasn’t really part of the modeling or the intelligence piece; it was part of the user experience piece. So, don’t forget about the honeymoon experience. This can also include the transition period from “the old way” to “the new way.” And that probably needs to be designed as well.

And if you’ve ever been through a really elegant software upgrade, or switched from one platform to another, you know this can be a real delighter for your customers. It’s a great way to build initial trust—creating that feeling of, “We know where you’ve been. We know how you were doing it in the past. This is the new way to do it. Here’s the value. We’re going to hand-hold you through the entire process, whatever it may be.”

That’s a great way to build trust and actually buy yourself some credit, especially if the early honeymoon experience with your intelligent software isn’t great out of the box—maybe it takes time before it can really show its value. Well, you might buy yourself some credit by making that transition period really good. And speaking of transition periods: even with modern software, you’ll see this sometimes with SaaS products. Most of the time, the idea of a major redesign doesn’t even exist anymore; most software in the tech world is gradually changed over time—they’re constantly shipping small increments, so you don’t ever really see huge redesigns. And when they do want to make a sizable shift, particularly in software that’s used heavily—like a CRM used by the sales team—they will provide ways to access the old version and the new version simultaneously.

So, this could become relevant if, for example, someone used to have to implicitly eyeball different reports and come up with their own computation—look at a million views in Tableau to figure out, you know, my favorite example: how many carrots does this grocery store need to buy over the next two months?—and now you’re going to provide a model to do that. How do you transition them from the old way to the new way? Giving that intentional thought matters; your customers will value you for it. And again, you’re probably buying yourself some credit there.

The other aspect here is no obvious benefit to the new version of whatever the thing is. This can be where customers perceive the switching cost as simply too high to move to your new version. There are lots of different factors here, and there’s no hard line between when the honeymoon ends and when you’re in what I call the nth-day—letter N—experience. You will have an infinite number of nth-day experiences, but you’ll only have one first-time experience, and you really only have one honeymoon experience, which spans a range of time. So, make sure you’re designing for all three of those phases.

Number eight: the right subject matter experts and stakeholders were not included from the start, because their participation did not necessarily block technical implementation. This gets to the idea of who should be at the table during the design of the solution—the design of the human-facing, human-interaction piece of what you’re building. Having a few experts in narrow areas owning too many aspects of the software creation can be a problem. Code check-ins and other easily measurable technology work become the focus of success, because you don’t have a product manager overseeing the overall value creation; now you’re just looking at the engineering piece.

There are all these different ways you can get into trouble by not having the right people in the room at the right time. The one I hear about most frequently is subject matter expertise not being paired up with the data scientists; you hear that a lot. But I actually think designers also need to be part of this. Again, the more your solution depends on a human using the tooling and the application properly, the more powerful it is to get that power trio together with the subject matter experts, especially if the expertise isn’t in-house; you’ve got to get the right people in the room. Two other points on subject matter experts and stakeholders who should be there.

One—and we hear this with ethics—is: who are the people that will be affected by this software, this new intelligent application or AI, who are not going to be in the room to help create it, but will be recipients of its effects? Simply have the discussion about who we could really benefit, or who we could totally screw, in how we do this. Be aware of that, and then make a conscious choice to go out and get representation from those groups. Even just having that discussion, I don’t think, happens a lot of the time, so that’s something you should be doing.

And on a similar note—and this is less about ethics—make sure you get the right internal stakeholders and leaders involved, the ones who may not have seemed essential at the beginning of the project. This is something that often happens with user experience design: when we look at a project from a technical standpoint, we focus on the modeling, the data requirements, the engineering, and all that stuff. When you bring user experience into the loop, sometimes what you find out is, “Wait a second. The customer does X, Y, and Z, then they go over here and do A, B, and C, and then they come back. And the only way they’re going to come back is if they get what they need over in A-B-C land first.”

And guess what? We don’t have anybody involved in this project who has any decision-making authority over, or anything to do with, A-B-C land; we probably need to involve them. This could be something like a handoff between sales and marketing, where you have some sales representation but no marketing representation, because there were no marketing requirements in the original ask—yet the work you do for the sales team is going to affect the marketing department. The point is, if you don’t go through the process of journey mapping and figuring out what user experiences actually look like with this tool, you’re not going to know who you need to invite into the room.

So, part of it is knowing who needs to be there. The next part is making those invitations and getting the buy-in. And sometimes what can happen here is, in this case, marketing doesn’t have time. Or this isn’t their project, this is sales’ project with IT, or whatever it may be; you get into the fiefdoms.

And this is why having buy-in at the highest levels to do this stuff the right way—to design these products and solutions the right way—is important. If you don’t have that buy-in, then what I’d be doing is telegraphing very clearly what the risks to the project are. In this case, it may be, “Look, if we don’t get marketing involved, no selling is going to happen regardless of this model, because the customer needs A-B-C information before they can do X-Y-Z with our new AI. If we don’t know how to properly integrate with those applications technically, and we don’t understand the human work—the tasks, jobs, and activities that happen—the chance of this entire project being successful is low.” You have to make that business case, and senior leadership needs to be aware that you may need to pull in time from different departments to participate in the design process. So, that’s that.

Number nine: tactics and technology—like using Agile, or “just get some machine learning and AI into our software”—are given a higher priority than producing meaningful outcomes. I sometimes call this the AI hammer: “We need a strategy. Go hire some data scientists; just start doing some AI, because everyone else says we’ve got to get there.”

The same thing happened with Agile, I think. There are a lot of places where it’s actually still new—old news in the tech industry, but new news at some larger traditional companies. You don’t just get good, better software for free by using Agile. There’s a mindset change that has to happen, and just like what I’m talking to you about here, design is also a mindset change. So, focusing on the tactics and the tech—going through the rituals of scrums and stand-ups, “we have to hire this kind of person,” “we probably need to get this platform up and running because everyone else is using it to do AI,” et cetera, et cetera—the list goes on and on. Those are all focusing on outputs instead of outcomes, and ultimately, if you want to create value, you need to be focused on producing outcomes with the outputs your team puts together.

So, when you say “success of the project or the product,” your data science team might have heard, “success means high predictive power in the model,” whereas business stakeholders and customers might have a completely different idea of what it means. For a customer, it may mean, “I don’t have to spend any more time waiting in your phone queue to do X, Y, and Z,” or, “The process of contacting customer support is so much better. I don’t even want to have to talk to you guys, but at least you’ve made it easier for me when I do.” Their idea of what success is, is completely different. So, the point is: you’d better make the success criteria really clear, and really focus on the outcome piece, so you don’t get lost in the exercise of producing outputs and focusing on tactics.

And the last one I want to leave you with today is specific to machine learning. This is when operationalizing the model is seen as someone else’s job, or is treated as a distinct phase in the project that is less important than, or simply separate from, the data science work. I understand that, technically speaking, it is separate, and you may have a large enough organization where that makes sense—where you really want your data scientists doing only the modeling work. But whoever does the work, the point is that operationalization of the model is not a separate activity.

You can’t look at it like that. It needs to be integral to how you approach the system itself, the entire experience. A well-designed experience would not decouple these two things; there’s too much overlap between how you successfully operationalize the model and what the model is. This gets into things like explainability and interpretability, which we talked about earlier. If you simply build the model without understanding whether interpretability matters—because deployment and operationalization of the model was someone else’s job—well, you can see what happens here, right?

If you just had your team create a black-box model that’s super accurate, but the team that’s going to use it is being asked to make a major switch in how they do their work, and they don’t trust this thing because they have no insight into how the model makes its predictions, then your system is going to fall flat. This is why you don’t want to decouple the two: because you won’t know whether interpretability matters. And it doesn’t always matter; there are times when it won’t, and then you might not need to use that particular data science technique, or algorithm, or whatever it may be, to produce the outcomes you’re trying to get to.

So, I think what’s really important, too, is simply for teams building digital experiences to have one-on-one customer time. They either need to be participating in one-on-one research, or they need to be shadowing and observing it as it happens, preferably live. They need to watch how people do their job. We talked about this earlier with empathizing with the customer, but the more you can do that and make it a regular cadence in your group, the better the technology and applications your team is going to put out. It’s going to keep them from going native, from thinking, “Look, it’s not my job; it’s somebody else’s.”

When you watch someone suffer, it’s difficult; it changes your brain, it changes your approach. It’s probably the number one thing I like to do, especially when I go into very technology-heavy companies: get the technology people—especially decision-makers—in front of customers. Sometimes it takes recording some sessions with customers and showing a highlight reel before the light goes on. They’re like, “Wow. We had no idea what the resistance to this was going to be. Maybe we can try a small prototype first. Or maybe there’s a shorter thing, a smaller increment, we could do to test the waters before we jump into a really big project together.” So: operationalizing the model—don’t look at it as a separate activity when you’re thinking about this.

So, anyhow, those are my top ten reasons customers don’t use or value your team’s new machine learning or AI-driven software applications. If you’d like to get insights like this in your inbox, head over to designingforanalytics.com/list and join my DFA Insights email list. I usually send out stuff anywhere from daily to weekly—at least once a week.

And until next time, stay safe, wear those masks. And remember, nobody cares about technically right and effectively wrong data products. Ciao.
