Di Dang is an emerging tech design advocate at Google and helped lead the creation of Google’s People + AI Guidebook. In her role, she works with product design teams, external partners, and end users to support the creation of emerging tech experiences. She also teaches a course on immersive technology at the School of Visual Concepts. Prior to these positions, Di worked as an emerging tech lead and senior UX designer at POP, a UX consultant at Kintsugi Creative Solutions, and a business development manager at AppLift. She earned a bachelor of arts degree in philosophy and religious studies from Stanford University. Join Brian and Di as they discuss the intersection of design and human-centered AI.
Twitter: @Dqpdang
Di Dang’s Website
Di Dang on LinkedIn
People + AI Guidebook
“Even within Google, I can't tell you how many times I have tech leaders, engineers who kind of cock an eyebrow at me and ask, ‘Why would design be involved when it comes to working with machine learning?’” — Di

“The software application of machine learning is a relatively nascent space and we have a lot to learn in terms of designing for it. The People + AI Guidebook is a starting point and we want to understand what works, what doesn't, and what's missing so that we can continue to build best practices around AI product decisions together.” — Di

“The key value proposition that design brings is we want to work with you to help make sure that when we're utilizing machine learning, we're utilizing it to solve a problem for a user in a way that couldn't be done through other technologies or through heuristics or rules-based programming—that we're really using machine learning where it's most needed.” — Di

“A key piece that I hear again and again from internal Google product teams and external product teams that I work with is that it's very, very easy for a lot of teams to default to a tech-first kind of mentality. It's like, ‘Oh, well you know, machine learning, should we ML this?’ That's a very common problem that we hear. So then, machine learning becomes this hammer for which everything is a nail—but if only a hammer were as easy to construct as a piece of wood and a little metal anvil kind of bit.” — Di

“A lot of folks are still evolving their own mental model around what machine learning is and what it's good for. But closely in relation—because this is something that I think people don't talk as much about, maybe because it's less sexy to talk about than machine learning—is that there are oftentimes a lot of organizational or political or cultural uncertainties or confusion around even integrating machine learning.” — Di

“I think there's a valid promise that there's a real opportunity with AI. It's going to change businesses in a significant way and there's something to that. At the same time, it's like: go purchase some data scientists, throw them in your team, and have them start whacking stuff. And they're kind of waiting for someone to hand them a good problem to work on, and the business doesn't know; they're just saying, ‘What is our machine learning strategy?’ And so someone in theory hopefully is hunting for a good problem to solve.” — Brian

“Everyone's trying to move fast all the time and ship code, and a lot of times we focus on the shipping of code and the putting of models into production as our measurement—as opposed to the outcomes that come from putting something into production.” — Brian

“The difference between the good and the great designer is the ability to merge the business objectives with ethically sound user-facing and user-centered principles.” — Brian
Brian: All right. We're back with Experiencing Data again, and this week I have a representative from Google on the line. We have Di Dang. Hi Di. Di: Hey Brian. Brian: What's happening? Di: Not much. Life, you know? Brian: You're in London spreading some gospel about AI and user experience design, is that right? Di: That is correct. Yeah, so I'm usually based in Seattle, but this week I am in London for the PAIR Symposium. Brian: Nice. Di: So, the People + AI Research Symposium, where we're bringing together people from across policy, government, the tech industry, social good, education and such, to discuss the field of human-centered AI and what we can do to ensure a more inclusive, accessible, ethical future that incorporates AI for all of us. Brian: Yeah. I'm looking forward to digging into some details on this. Your title at Google is Emerging Tech Design Advocate. So, what... does that mean you're a designer and you've kind of moved into like an advocacy role or ... Could you explain what that means to the non-Googlers? Di: Yes, for sure. And even to Googlers themselves; there's not many of us design advocates at Google, so it's a mouthful and it's a relatively niche role. But to your point, precisely, my background is as a UX designer and I work now in more of an advocacy capacity. So what that means exactly is that I am focused on working with the product community, working with product teams outside of Google to understand what their challenges and pain points are when it comes to emerging technologies. So for instance, what does it mean to create a usable, delightful mobile augmented reality experience? Right? Or like, "Hey, what's this AI thing that people keep talking about? How can we make sure that we are designing machine learning-driven features that are human-centered, where our users understand how to utilize this feature and see what its unique capabilities and constraints are?" So essentially my role as an emerging tech design advocate is to help product teams, help designers overcome challenges when it comes to working with machine learning in this case. Brian: And is this largely within the walls of Google that you're doing this work or ... Di: No. No. So while I may be a Googler, my audience, the people that I work with, are all outside of Google. Or, in terms of the teams, the teams and organizations that I work with are outside of Google. Because it's important for us to ... Brian: Mmhm. Di: So, I'll take a step back. At Google, the team that I work with is called PAIR, People + AI Research. And the PAIR team was founded a couple of years ago for really two key purposes. The first is to ensure that applications of machine learning are grounded in user needs. That we're not just doing machine learning for the sake of doing machine learning because it's the cool, sexy, hip thing to do. Right? And the second key purpose behind the PAIR team is to ensure that when we're utilizing machine learning, we're doing so in ways that are beneficial and inclusive for our end users. And so a part of that work means that even though I work with the PAIR team, it's important for us to understand what the AI design and development process looks like for teams outside of Google. You know, everyone ranging from, say, an NGO working in India or an NGO based in India, right? To another large enterprise, let's say, based in the UK.
What are the challenges that they're running into when it comes to machine learning literacy? Brian: Got it. Got it. So, at a high level ... Like, we talk a lot on this show about ... Obviously my background's in human-centered design and we're always talking about analytics and AI as well, and decision support. Right? Because a lot of these applications are about predicting or prescribing information that hopefully comes in the form of decision support and helps you make a good choice. So, from your perspective, if you could tell a data science or analytics leader one reason why design matters in the context of developing AI products or algorithms... specifically I would say let's talk about machine learning, I know there are other technologies... but if there's one reason why design matters here, what would that be? Di: Just one reason? I only have one reason I can give? Brian: No, like the most important thing. It's like who ... So, it's kind of like, okay, so what? Di: Yeah. Brian: So I'm a leader, whatever, my background is in analytics or stats, math. Maybe I'm not a tech company; why do I need to care about this? Like, sell me on why this matters. Di: Yeah. And I'm really glad that you asked that. I'm really glad that you posed that question to me because I'll be honest with you. Even within Google, I can't tell you how many times I have tech leaders, engineers who kind of cock an eyebrow at me and ask, "So, why would design be involved when it comes to working with machine learning?" You know? And it's an obstacle that I have to make sure that I clear in advance. And so, say you're coming from a deep data science and machine learning engineering background; the key value proposition that design brings to you is we want to work with you to help make sure that when we're utilizing machine learning, we're utilizing it to solve a problem for a user in a way that couldn't be done through other technologies or through heuristics or rules-based programming. Right? That we're really using machine learning where it's most needed. That's one big thing, to make sure there's an actual fit. Di: Another is that we can help save time and money. We all know that working with machine learning is a very time- and cost-intensive process. Right? Everything from collecting, labeling, and cleaning the data, to taking the time and energy to build and train the machine learning model. And the unique value prop of design is that we can help test out the proposed machine learning solution by faking it before you ever actually go about collecting any data or building any models. And so, I'll give an example. There's a user research and design method that we utilize at Google called triptych. It's a mixed-methods user research and design process that allows us to test out a number of different hypothetical solutions... like hypothetical features that we're thinking of working on for users... and quickly validate them with users in the form of user testing or user research to understand, "Okay, is this solving a real need? How urgent or high-priority of a problem is this?" Di: And then based on that, make sure that when we do start working more closely with our data scientists or our machine learning engineers, their time and energy is being invested in a feature that's already proven out, that's already been validated to be impactful. And so, how triptych works is essentially you kind of create ...
Well, first you start off with a lightweight survey. Right? So say there is a ... I'm actually going to use a mini case study from the Pixel 2 team, where they were working on the Now Playing feature, which shipped on the Pixel 2 phone a couple years ago. And essentially what it does for the user is that in the lock screen state, if there is ambient music playing in the background, the Pixel is able to tell you what song and artist that is without you actually doing anything on the lock screen itself. Di: And that was the end feature that ended up shipping. Right? But walking back to how did this even come to be... how did design and research and machine learning and data scientists come together to make this happen... well, the team was brainstorming potential features to work on for the Pixel 2. And so one of the problem statements they landed on was, potentially: as a user, I want to know what song is playing without actually fussing with my phone. Right? And this was one potential problem statement out of, I don't know, 15 or 20 other potential problem statements. So then based on that, they brought users together, shared out that problem statement along with the other ones, and asked them to fill out a quick, lightweight survey rating how frequently they encountered this problem... like, say, on a daily, weekly, monthly, whatever basis... as well as how severely or urgently they would like this problem to be solved. And so that kind of quickly helps us assess how significant of a user benefit it would be if we could deliver on this. Right? Di: And then you move into the storyboard process, where essentially the design team will quickly mock up or kind of illustrate three panels. The first panel represents the problem statement. So, as a user, I want to be able to know what songs are playing in the background. The second panel is the proposed solution. So, my Pixel can automatically tell me what song is playing based on what's in my environment. And then the third panel is the impact, the potential impact for the user. So, as a user, I can go about my day and know what's playing and not have to mess with it. So, three panels: problem statement, proposed solution, as well as potential impact. Di: And then we take this and we show it to our users within this research setting to try to elicit some reactions. Not only, again, what do you think about this solution and how it fits into your day-to-day life, but also: what concerns do you have? What questions do you have? And so, as you can probably imagine, some of the top concerns that we heard were like, "So, is this going to drain my battery life? Does this mean that Google is always listening to me?" Et cetera. And based on that, the team was able to work with the engineers to then create a solution... the final solution of the Now Playing feature... that included settings for opting in and out of this for those who are more concerned about privacy, as well as a little tooltip on the settings panel that actually talked a little bit about how this works. That this is on-device machine learning, so none of your information is sent up to the cloud, et cetera, et cetera. Right? Di: And so, the reason why I talk about this is that the team didn't immediately start in on, "Okay, what's the training data that we need?
And we should actually start building up models in order to have a functional prototype to test this." But we were able to test this hypothesis purely through low-tech, no-tech means... purely through a three-panel storyboard... to make sure that when we were ready to work more closely with our data scientists or our machine learning engineers, we were in a good place to deliver a solution that would actually be meaningful for users. Brian: Got it, yeah. No, that's fairly traditional in terms of the standard design processes. Although, if I can take a guess and try to add to what you said, and you clarify if I'm wrong here for our listeners: I think part of what you get out of that, if you're a business leader coming at this from the data science or analytics side of the fence, is that you don't always know exactly how you need to deliver the machine learning, in terms of the format or the experience that needs to be had. And so, part of what this process did, it sounds like, is it surfaced the need to discuss or provide an affordance Di: Mmhm. Brian: To learn about privacy within that experience. Where there might be other times... like autocomplete. Maybe we don't talk about how autocomplete works when you're typing, even though maybe that uses machine learning, because through the process of doing this prototyping and getting validation, maybe you find out that users never brought it up, it doesn't matter, we don't need to spend time explaining how that works. Brian: And while that's perhaps a trivial, minor piece of content there, it's the moving from the not knowing to the knowing that that's actually something that needs to be factored into a successful solution, which itself may have impacts on the technology choices that are used. We talk about model trust a lot at the show... so how much interpretability needs to be given to the customer in that particular context to trust the solution and decide that they're going to engage with it. Is that also what you ... Did I properly summarize that or did I just make that up? That was my perspective. Di: No, that was beautifully articulated. So beautifully articulated I wish I had said those exact things myself. No, that ... Yeah, that was brilliant. And as I was- Brian: This show is not sponsored by Google, by the way. Di: Swear to God. And as I was listening to you reflect that back to me... a key piece that I hear again and again from internal Google product teams as well as external product teams that I work with is that it's very, very easy for a lot of teams to default to a tech-first kind of mentality. It's like, "Oh, well you know, machine learning, should we ML this?" Right? That's a very common problem that we hear. Brian: Oh yeah. Di: So then, machine learning becomes this hammer for which everything is a nail. Brian: It's a verb. Di: But if only a hammer were as easy to construct as, like, a piece of wood and a little metal anvil kind of bit. Di: Right? Like, say, if it took months to create that hammer instead. Brian: Sure. Di: And so that's why when we talk about the value proposition of incorporating design into the machine learning process, it's starting first with the question of, "Okay, what are the user problems that we want to solve? What are the user needs that we are seeing?"
And then the secondary question is: of all the technological tools in our toolkit, machine learning being one of them, how is machine learning uniquely positioned to solve this problem in a way that couldn't be solved through any other means? Brian: Mmhm. Di: And then that's the foundation that we start from. Brian: So, it sounds like you're saying that you're hearing a tendency to want to use this tool. Like, we're hunting for places to whack our machine learning hammer. Are you seeing this at Google as well? And I think we talk about this kind of FOMO that's there. And I think there's a valid promise that there's a real opportunity with AI. It's going to change businesses in a significant way and there's something to that. At the same time, it's like: go purchase some data scientists, throw them in your team, and have them start whacking stuff. You know, and they're kind of waiting for someone to hand them a good problem to work on, and the business doesn't know; they're just saying, "What is our machine learning strategy?" And so they're hunting for ... Someone in theory hopefully is hunting for a good problem to solve. Right? Brian: So how does your design group fit into that... what I call the problem discovery phase? And are you interacting with your machine learning and your data science counterparts during this, such that they're hopefully getting some of this rubbed off on them, and they're able to change their thinking and their approach so it scales? You know, because there's only so many PAIR advocates, Di: Mmhm. Brian: Right? As you would call them, at Google. How are they involved in that? That was kind of a long, rambling question, but I'm curious: are you guys all doing this together, and is there this kind of problem hunt phase that involves the technical people such that they can kind of frame their work with that perspective? Di: Yeah. Yeah, yeah. And that's a great question. Like, our key involvement in this is ... So, as I had mentioned... actually, I don't know if I've mentioned it yet... I work with the PAIR team, right? We launched the People + AI Guidebook earlier this year at Google I/O. And it's essentially an open source framework for helping product creators understand what's unique about making machine learning product decisions, different from traditional, non-machine-learning product design. Di: And so, one of the key things that we talk about at PAIR is... we kind of share out a set of exercises that you can do with your cross-functional team to come to that consensus. And so, something that we really advocate for and implement anytime, let's say, we're facilitating Design Sprints inside as well as outside of Google, is starting off with a simple two-by-two exercise. Right? So you have a two-by-two, so four quadrants, and the left axis, the Y ... it's not the left axis... the Y axis is, say, user impact or user benefit, and then the X axis is how critical or dependent on machine learning the solution is. Di: And then when we kick off brainstorming and ideation, essentially around the next sort of features that we want to work on, we bring together the entire cross-functional team. So not just your product manager, but your data scientist, your machine learning engineer, your tech lead, your UX designer, UX writer, user researcher... we bring together all the key disciplines.
And then we'll have time to essentially brainstorm, right, on individual Post-it Notes: what are potential solutions or features that we think could solve a need? And so everyone documents that. And then after that kind of individual ideation session, everyone starts mapping it out on this two-by-two. Right? Di: And so because you have the entire team coming together, the user researchers and the UX designers have a keen sense of what the users' current journey is, and so where these different solutions could fit in, as well as what their top pain points are. And so, what is a really impactful problem to solve for users? Right? And then on the other hand, you have the data scientists, your tech lead, your machine learning engineers in the room, and they're able to help validate or invalidate how critical machine learning is to solving this problem, or whether it can be done through another means. And when you have everyone in the room like that, ideating, having conversations about the trade-offs, ideally you want to converge on everything that's in the upper right-hand quadrant. Essentially, the ideas, the features that could be most impactful, most beneficial for your users, as well as the ones that are actually dependent... that really critically need machine learning in order for that solution to get off the ground. Right? And it's through that kind of foundational phase that you get that buy-in from the tech side of the house as well, to move forward. Brian: Thanks for breaking that down. The next question I'm having... so I'm taking this perspective in my head that a lot of my work has come through tech companies. That's kind of the ranks that I came up through. Di: Yeah, same here. Brian: Yeah. And so, a lot of the verbiage we're talking about here... user research, Di: Mm. Brian: Product management... some of these roles don't exist in places. Or at least when we say product management, we're probably thinking about a digital software product or application, like a commercial one, something like that. So, what if we're talking about, like, small motors for ... Like, I make the motors that go on lawn mowers, okay? So when we talk about product management at our company, I have the guy that works on the lawn mower-size motors, and then we go all the way up to large industrial-scale motors. And so, that's my "product". And maybe there are some applications of machine learning here, which perhaps don't have a heavy user-facing component, but I'm interested in this concept of making machine learning work for my business. I don't want to just throw the technology at the wall. I'm at least aware that we should be using machine learning in the service of some customer experience or problem. Brian: So my question is, who should I send? Like, if I was to send one person off and say, "Go look at this stuff that Di talked about on this podcast," and they have some guide or something, who would I send for my business to go do that? Who's the next best role if we don't have a UX lead, we don't have UX? Like, it just doesn't exist. Would it be that frontline business sponsor? Should it be the data scientist? Who would you suggest takes that first step? Di: I'd say it's the person who has the most combination of authority as well as investment in the end user-facing considerations. Right? And so, across ...
I'm thinking now of the teams that I work with outside of Google. That's ranged from roles like research scientists, who have that key interest in "How does this actually impact people at the end of the day?", to product leads, product managers... essentially kind of the mini CEO, so to speak, of this lawn mower type of product, this example that you gave. To even, you know, some ... Yeah, I've even had data scientists reach out to me in the past who didn't have a background in user experience design or research, but who were keenly interested in applying human-centered AI design processes. Di: So, the thing about ... And I'm glad that you pointed that out, because I do try to be cognizant of this context that I came up in around working in the tech industry. Even if you don't have a trained background as a user experience designer or researcher, or even if your team doesn't have those roles, the People + AI Guidebook is meant to help break down those silos and those barriers across these different titles and roles. Right? So, it's written in a very plain-spoken way, so that data scientists, engineers, tech leads can pick it up with a sense of, "What are some actual pointers in terms of the kind of questions that I might want to ask my user to make sure that we're solving the right problem for them? Or what are some things that I can think about when it comes to confidence in our goals and what it means for my users?" Right? Di: And this is kind of a larger discussion within the design field. We're all making design decisions, and some of us may have more training or experience in making these design decisions than others, but it definitely doesn't mean that there isn't an abundance of tools and resources out there to help us make as well-qualified design decisions as possible. Brian: Sure. And I'm glad you said that. And just to clarify what she's talking about... I don't think we've said it, but Google has produced, largely with your leadership from what I understand, the People + AI Guidebook, and it's at pair.withgoogle.com. And I'll put the link into the show notes here. So you're referring to that. And one thing I really did like about the guide when I went through it, and it's something I talk about on the show as well, is that it's not so much about job roles as it is about encouraging certain behaviors and activities. You figure out who goes and does them. Di: Yes. Brian: But it's really centered around the activities that need to happen in order to get these types of human-centered outcomes, if that's your goal. Brian: So, I really did like that. And it's not technical; it's something that any frontline business manager could understand. And to me, it's kind of a question of whether or not you want to move fast... whether you bring a designer in who's done this and who's familiar with all this kind of stuff to move fast and be efficient, versus kind of learning this process on the job and learning how to keep your technology cravings in check. You know, everyone's trying to move fast all the time and ship code, and a lot of times we focus on the shipping of code and the putting of models into production as our measurement, as opposed to the outcomes that come from putting something into production. So I think it's a nice check on: how are we going to evaluate the success criteria? What do we need to be thinking about that may not be obvious, Di: Mmhm.
Brian: That has nothing to do with code or model training or any of that? Well, it does have to do with model training a bit because you guys ... Actually, this is going to dovetail into my next question in a second, but I think you guys did a good job putting that together in a way that's kind of role-independent. Di: Yeah. And seizing on that, I really like the way that you set up the benefits and the goals of the People + AI Guidebook. It's really encouraging to hear, because that was one of our goals with this resource to begin with: to make practicing human-centered AI design as accessible as possible for anyone who's incorporating machine learning into their service, into their product, into their feature, what have you. Right? And I'll also take a moment here to further unpack what I mean when I say human-centered AI, because I know that can also sound very fuzzy or kind of ideological, in a way where it's like, "Oh, I'm not sure. What's the substance there?" Di: When I say human-centered AI, what I mean is that, with all the hype and craze around AI as of late, dovetailed with kind of larger misconceptions or misunderstanding around what we mean when we say AI or machine learning... How can it help us? What can it do for us? Right? There's a lot of misunderstanding and even concern out there amongst the general public. And so, when we say practicing human-centered AI design, we mean: as product creators, service creators, how can we make sure that we are making decisions about our product and service so that our users feel in control of the technology, and not the other way around? Because that's a large risk that we want to make sure that we're especially mindful of when it comes to this technology. Brian: Are you tired of building data solutions that don't get used or are undervalued by your customers, whether it's internal customers, or maybe you have a software product and the customers aren't seeing the value of the data that you're presenting? If so, I'm really happy to announce that I'm going to be running a new seminar called Designing Human-Centered Data Science Solutions. It's an online seminar and seats are limited. We're going to try to keep it small, and this four-week seminar is going to help you learn how to work on the human aspects of making your solutions really easy to use and making the value of them really obvious. As many of you probably know, you can get all the data and technical pieces right, but if your intended customer doesn't understand what it's for and how to take that data and make a decision, then it doesn't really matter what kind of plumbing and data pipelines you created in your warehouse and all the infrastructure that you might have stood up. It doesn't matter if that last mile isn't really solid. Brian: So, I want to get your customers past saying, "So what?" and "What do I do with this data?", and get them to a place where they look at your work as being really indispensable and they really understand the value of what it is that you're able to do with your technical skillset. Whether you're an analytics translator, a data scientist, an advanced analytics practitioner, or perhaps you have a team of people who need some help with these non-technical skills, I hope you'll check out the seminar. You can go to designingforanalytics.com/seminar to learn about the dates and the pricing and all that good stuff. So hope to see you there.
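For the more hands-on readers: the two-by-two exercise Di described earlier in the conversation can be boiled down to a small filtering step. Below is a minimal, purely illustrative Python sketch; the Idea structure, the 1-to-5 workshop scores, and the 3.5 quadrant threshold are all assumptions made up for this example, not anything prescribed by the People + AI Guidebook.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Idea:
    name: str
    user_impact: float     # Y axis: 1 (low) to 5 (high), scored by UX/research
    ml_criticality: float  # X axis: 1 (rules would do) to 5 (genuinely needs ML)

def upper_right(ideas: List[Idea], threshold: float = 3.5) -> List[Idea]:
    """Keep only ideas in the upper-right quadrant: high user benefit
    AND critically dependent on machine learning."""
    return [i for i in ideas
            if i.user_impact >= threshold and i.ml_criticality >= threshold]

workshop_ideas = [
    Idea("Identify ambient music from the lock screen", 4.2, 4.8),
    Idea("Alphabetize the settings menu", 2.0, 1.0),  # useful, but no ML needed
    Idea("Auto-suggest replies to support tickets", 3.9, 4.1),
]

for idea in upper_right(workshop_ideas):
    print(f"Prioritize: {idea.name}")
```

The point of the exercise is the cross-functional conversation that produces the scores, not the arithmetic; the code just makes the "upper-right quadrant" filter explicit.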
Brian: It's funny you brought up the hype, so this really ties into something nice. So I'm getting ready to run my first training seminar in January, Di: Oh. Brian: Called Designing Human-Centered Data Science Solutions. And so, as part of my own research and dogfooding this, I'm talking to people that come from the data science and analytics background to get feedback. And so, I love that you talked about the jargony sound of "human-centered AI", and so here's some feedback from Mike. And I don't know if he's listening to this show, but: "Brian, regarding the title of your page, I don't like the title of the seminar. Human-centered isn't a benefit to me. It sounds like I'm going to get a sermon on some design religion. Now, I see it's part of your core brand, but that doesn't make it meaningful to me, your audience, I presume." I thought that was really interesting, and apparently there's still some buzz there. How would you respond to someone like Mike if you heard that? Di: Quite honestly, I love that Mike said that. I really appreciate when people are just straight up candid about that kind of stuff. Yeah, it's great. Brian: Oh, he gave me the best feedback ever. Like, it was a wonderful email. I loved it. Like, please, dump it on me. Di: Yeah, keep it coming, Mike, keep it coming. You know, my Twitter, my email, whatever... feel free to reach out to me too. I want to hear all the honest shit. I mean it. Brian: Right, right. Right. Di: Yes, absolutely. And so when I'm talking about the People + AI Guidebook, or even the practice of human-centered AI design, the framing and even the vocabulary that I use varies depending on whether I'm talking to a UX and product manager audience, or an audience that skews more engineering, more machine learning engineer or data science. Right? And so, I'll be honest that when I'm talking to UX and PM folks, I'd say human-centered AI, because they're already bought into the human-centered design process. I mean, that's an actual thing that you could google and deep dive into. But when it comes to data scientists and machine learning engineers, I'll be honest, I get a lot of skepticism around some of the more fuzzy, qualitative design thinking processes, where they're not quite sure, "Okay, so what does this actually get us? What does this drive at? Or is it just kind of empty ideation?" Di: And so, when I'm talking about the People + AI Guidebook and human-centered AI, I'll actually refer to it as essentially what it is: a resource of best practices for AI product decisions. Because regardless of what role you play, if you are incorporating machine learning in some meaningful way into an end customer experience, there are going to be decisions you need to make around, say, how do you onboard your user? How do you onboard your customer to using this machine learning-powered agent, for instance? Or how do you help set their expectations so they're not disappointed if the agent or the system doesn't respond with the kind of output that they're looking for? How do you ensure that they don't over-trust your machine learning system or the agent? That they know when to override it with their own intelligence? Right? Di: I mean, there are so many ... At the end of it, if you're using machine learning to help solve some business goal for you, chances are those business goals are going to be connected to some customers or to some users.
And so, how your users, your customers, interface with whatever machine learning solution you have is a huge consideration that goes beyond ... That's the harder, I guess less squishy, less people-interest-sounding side of human-centered AI. Brian: Right. Di: On one hand you can call it human-centered AI, but the other face of it is essentially: how do you make decisions that set up your service for success? Brian: Mmhm. And again, for our audience here, when she talks about customers, I think we sometimes have to change ... You know, that language can be "solution" instead of "product", Di: Mmhm. Brian: If we're talking internal, and the customer here may be your internal business sponsor. So, this is ... If you're working at a bank and, like, we're trying to predict give a loan, don't give a loan, using machine learning, and you're providing decision support here... these are the types of checks that the design process can put on measuring whether or not the machine learning outcomes are actually helping the loan officer make a loan approval decision or not. I think you can start to hear, from the way you explained it, how this ties in. This is very customer-focused in the sense of a customer being this internal employee who's going to use this. It doesn't really matter; the point is, even bankers are humans. Brian: So, even the bank employees count as a customer. And if they choose to ignore the solution... because maybe it's not transparent enough in how it works, or it's too prescriptive, whatever it may be... that would be your measurement or your check. And that's both the qualitative and quantitative check. Right? You can probably somehow measure whether or not the recommendation is accepted or not accepted. But there's also the qualitative part where you get that kind of verbal feedback, Di: Mmhm. Brian: Or you do some kind of usability testing and you get those reactions from this loan officer who's supposed to be using this tool, and why they may be ignoring it or what makes them not trust it, and this kind of thing. So, that's really the benefit. Brian: And then ultimately, if you have that business responsibility, then that's where butts are on the line, right? Because if you spend 25 million dollars in six months building some giant thing that's supposed to help you predict give a loan, don't give a loan, and the loan officers are still using their old recipe to do everything... because maybe they got burned, or they saw some really suspicious recommendation that they didn't trust and they kind of just put it aside as "this thing isn't ready yet"... that's where you have to decide, "Well, was it worth 25 million dollars?" And someone eventually is going to ask a question like, "What did we get for this 25 million dollars that we spent? Why are people not using this? We're not seeing an income change, we're not seeing loan approval rates going up, or whatever that metric is that we want to measure against." So, I think it's important to remember that a customer here can mean internal business sponsor as well, for the non-tech folks out there. Di: Yes. And I'm really glad that you drove that point home, because I had mentioned upfront that a lot of my work is helping teams outside of Google overcome challenges in working with machine learning, right? Or in designing with machine learning. Brian: Mmhm.
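As a rough sketch of the quantitative half of the check Brian describes, a team could log what the model recommended against what the loan officer actually did and watch the acceptance rate over time. Everything here (the field names, the sample log, the 70% alert threshold) is a hypothetical illustration, not something from the episode or the guidebook.

```python
from collections import Counter

# Hypothetical decision log: the model's recommendation vs. what the
# loan officer actually did. Field names are illustrative assumptions.
decision_log = [
    {"model_says": "approve", "officer_did": "approve"},
    {"model_says": "deny",    "officer_did": "approve"},  # officer override
    {"model_says": "approve", "officer_did": "approve"},
    {"model_says": "deny",    "officer_did": "deny"},
]

outcomes = Counter(
    "accepted" if d["model_says"] == d["officer_did"] else "overridden"
    for d in decision_log
)
acceptance_rate = outcomes["accepted"] / len(decision_log)
print(f"Recommendation acceptance rate: {acceptance_rate:.0%}")

# A low rate is the cue to bring in the qualitative half: interview the
# officers who override the model to learn what broke their trust.
if acceptance_rate < 0.70:  # assumed alert threshold
    print("Investigate: officers may not trust or understand the model.")
```

The number alone can't tell you why officers override the model; that's what the usability testing and verbal feedback Brian mentions are for.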
Di: And for a lot of folks... maybe less so for your listeners who already have a deep data science background... but for a lot of the teams that I work with, they're very keenly interested in what this "new" world of machine learning entails, but they're just trying to wrap their minds around the vocabulary even. Right? And, like, what are the capabilities and strengths of machine learning, for instance? Brian: Mmhm. Di: And so there's... A lot of teams, a lot of folks are still evolving their own mental model around what machine learning is and what it's good for. But closely in relation... because this is something that I think people don't talk as much about, maybe because it's less sexy to talk about than machine learning... there are oftentimes a lot of organizational or political or cultural uncertainties or confusion around even integrating machine learning, so to speak. Right? So, kind of to your point: for this 25 million dollar investment, what am I going to get from it? Or how does this improve on the baseline? Or, if I'm working in an organization that may be more risk-averse, why would I take a bet on machine learning? Di: Or, the solution itself is never going to stand alone in a silo, right? Your tellers are going to interact with it, your bank customers are going to interact with it, you have bank managers who are going to have some kind of touchpoint with it. There are a lot of other people involved in this system around whatever machine learning solution it is that you happen to implement. And so, we need to have an understanding of what their concerns and what their challenges are before we ever start working on the solution to begin with. Because otherwise it just falls flat on its face. Brian: Yeah, yeah. One thing I liked in the guidebook was there was even talk about designing the rater experience... so if you're labeling data, Di: Mm. Mmhm. Brian: When we talk about human-centered, right? Where you might even be talking ethically about the people who aren't even in the room, Di: Mmhm. Brian: So this is partly where designers can help you think about what's not obviously present there, because we're always thinking about that experience piece, right? But also, how is this going to create business value? And sometimes those things are at odds with each other, but that's, to me, part of ... That's the difference between the good and the great designer: the ability to merge the business objectives with ethically sound, user-facing, user-centered principles. It's both of those, and it's partly why I don't like "UX" as much, because I feel like UX is always talking about one half. In a commercial setting at least, you're talking about one half of the story, and there are really two pieces that need to come together in order for most of these solutions to be viable. You know? Di: Yeah, absolutely. Brian: So, we're getting close to our time here. I wanted to go one nerdier level down on something here, just for some of our more technical audience that might be listening. I was curious after I read the guidebook: is there a different approach or considerations that need to be given when you pair up human-centered design and machine learning, based on the type of technical model that's being used? Di: Mmhm.
Brian: So, it's a decision tree, or it's a random forest, or whatever that model is that needs to be used. I'm guessing there are times where there's a strong preference to use a model for a technical reason, or it's almost like you pretty much know you're going to need to use that. Does it change what activities the design group will go through in order to get that right, or are they pretty independent? Di: Ooh, so for that, I will be honest and tell you upfront right now that I could not tell you the nitty-gritty differences between, say, a random forest algorithm and a convolutional network or whatever else. In terms of the kinds of models, I wouldn't be able to speak to the technical details under the hood of it, but what I can ... And I say this... it's something that I continue to try to learn, because the more I understand, the easier it'll be for me to work with my own machine learning colleagues. Brian: Mmhm. Di: Right? But for folks at large who are utilizing the People + AI Guidebook, or who are thinking about the end user-facing considerations of machine learning: if you're not the one who's directly working on, say, collecting and cleaning the data, or if you're not the one who's directly working on the model itself, you are still empowered to make sure that you are making decisions... product decisions, business decisions... that are in the best interest of your end users. Brian: Mmhm. Di: And so, I'd say for folks like myself who come from more of a user experience or a product background, it's helpful for us to understand the nitty-gritty of models, but it's not necessary, because what we have the most stake in is making sure that we understand how utilizing machine learning impacts the end user experience. And that's something that stands independent of the actual model itself. Brian: Yeah. And so, my question wasn't so much the difference between the models, but whether or not the design activities we do change because of a technical model choice. Di: Ohhh. Brian: So, neural network versus a random forest. Oh, it's a neural network, and for some reason we know it's going to have to use this technology, so therefore we're going to do three times more sugar and two parts less flour in the design recipe that we go through, because of that technology choice. That was more the question. Sorry. Di: Mm. I see. No, no, I understand now. Not in my experience, no. Brian: That's all right. No, I was just curious. Cool. This is ... I mean, you've dumped a ton of great stuff on our listeners. It's been really fun to talk about this. So, I'm curious... again, thinking about our audience, data science leaders and analytics leaders that may be coming at this from that technical math and stats perspective... any final takeaways that you would give to them about either putting this into play, or getting past the feeling that this sounds like it takes a lot of time and it's going to slow stuff down? I can almost guarantee there's probably people thinking about that. Sounds really nice... who has time for that? How would you send them away from this episode? Di: Yeah. Well, I'd say you can either save time upfront, or you can potentially lose time coming out the other end. Right?
What if you spend all that time investing in machine learning and you find out, actually, no, the output it's generating isn't useful for our users, or they don't trust it, or they have other qualms over the fact that machine learning is even at play at all? I mean, that would be a real pity to find that out at the end of, you know, nine months and all this funding that you've invested in the effort. Right? Versus validating very early on and having the confidence that you are on the right path to solving the right problem. Brian: Sure. I would 100% second that. I think you've just built your technical debt up. You have a team now that's spent however many months building the wrong stuff that nobody wants to use. Your trust... the perception of the quality of your work and the value that you're providing... suffers, and maybe in some places no one's even paying attention, because there's a lot of money being thrown at this space and not a lot of accountability, it seems, a lot of the time. But it's so much more fun to work on stuff that people want to use, at least as a maker. And I tend to think of data scientists as problem solvers and makers. So they are in the family of designers, and it doesn't really matter whether we call them designers or not, because it's really about the activities and behaviors that we go through. And I don't know, it's even just a fun and job satisfaction thing too, where you start small, you try to make small improvements and monitor them over time, and to me, it builds up so much more rapport with the teams. It's a longer-term thing... instead of thinking at each project level, right? It's about a team that's starting to be looked at as strategic, you know? Di: Yeah. Brian: Cool. Where can people find out more? I know we said pair.withgoogle.com has the guidebook on it, and I'll put that in the notes. If someone wanted to just kind of follow your ramblings, do you ramble on the twitters or the LinkedIns, or where are you? Di: Yes. So you can find me on Twitter, my handle is @Dqpdang, and Brian, I can also make sure I send that to you. Brian: Yes. Di: And you can also find me on LinkedIn. Feel free to reach out and send me a message. I don't check LinkedIn as often as I should, but my inbox is open on Twitter as well, so feel free to reach out anytime with feedback, questions, concerns... honest, candid feedback, Mike, if you're listening out there. I'm game for all of it, because the People + AI Guidebook is a starting point. It's a living document. You know, when it comes to... Brian: Yep. Di: ...software applications of machine learning, it's a relatively nascent space and we have a lot to learn in terms of designing for it. Right? And so, the People + AI Guidebook is a starting point, and we want to understand what works, what doesn't, and what's missing, so that we can continue to build the set of best practices around AI product decisions together. Brian: Great. I love it, and thanks for putting this out and sharing all this good stuff with us. This has been a great episode. Di: Thank you for having me, Brian. Brian: Yeah. Cheers.