033 — How Vidant Health’s Data Team Creates Empathetic Data Products and Ethical Machine Learning Models with Greg Nelson

February 25, 2020 00:41:24
Experiencing Data with Brian T. O'Neill

Show Notes

Greg Nelson is VP of data analytics at Vidant Health, as well as an adjunct faculty member at Duke University. He is also the author of the "Analytics Lifecycle Toolkit," a manual for integrating data management technologies. A data evangelist with over 20 years of experience in analytics and advisory work, Nelson is widely known for his human-centered approach to analytics. In this episode, Greg and I explore what makes a data product or decision support application indispensable, specifically in the complex world of healthcare, covering topics such as product management for analytics, designing with empathy, and building ethical machine learning models.

Resources and Links:

Vidant Health
Analytics Lifecycle Toolkit
Greg Nelson's article "Bias in Artificial Intelligence"
Greg Nelson on LinkedIn
Twitter: @GregorySNelson
Video: Tuning a card deck for human-centered co-design of Learning Analytics

Quotes from Today's Episode

"We'd rather do fewer things and do them well than do lots of things and fail." — Greg
"In a world of limited resources, our job is to make sure we're actually building the things that matter and that will get used. Product management focuses the light on use case-centered approaches and design thinking to actually come up with and craft the right data products that start with empathy." — Greg
"I talk a lot about whole-brain thinking and whole-problem thinking. And when we understand the whole problem, the whole 'why' about someone's job, we recognize pretty quickly why Apple was so successful with their initial iPod." — Greg
"The technical people have to get better [...] at extracting needs in a way that is understandable, interpretable, and really actionable, from a technology perspective. It's like teaching someone a language they never knew they needed. There's a lot of resistance to it." — Greg
"I think deep down inside, the smart executive knows that you don't bat .900 when you're doing innovation." — Brian
"We can use design thinking to help us fail a little bit earlier, and to know what we learned from it, and then push it forward so that people understand why this is not working. And then you can factor what you learned into the next pass." — Brian
"If there's one thing that I've heard from most of the leaders in the data and analytics space, with regards particularly to data scientists, it's [the importance of] finding this 'other' missing skill set, which is not the technical skill set. It's understanding the human behavioral piece and really being able to connect the fact that your technical work does have this soft skill stuff." — Brian
"At the end of the day, I tell people our mission is to deliver data that people can trust in a way that's usable and actionable, built on a foundation of data literacy and dexterity. That trust in the first part of our core mission is essential." — Greg

Transcript

Brian: Hello, everyone. Welcome back to Experiencing Data. I'm Brian O'Neill, and today we have Greg Nelson from Vidant Health on the line. Hey, Greg. How's it going?
Greg: Hi, good afternoon.
Brian: Hope you're warmer than I am. I'm freezing already. I'm ready for Arizona.
Greg: No, it's a beautiful blue-sky day in North Carolina.
Brian: Alright, excellent. Well, we're going to nerd out on some analytics and health stuff today. So Greg is the VP of Analytics and Strategy at Vidant Health. You've also authored the Analytics Lifecycle Toolkit. I wanted to chat with you a little today about health data. There's a lot going on in this space. I want to give you a chance to introduce who you are and what you do. But I feel like healthcare is such a gigantic mess. There are so many factors. There are technology issues and ethics pieces and motivations in place. A lot of the issues are not even technical, I'm sure. It's the incentives that are in place. How do you live through this? How do you make a difference when there are so many things that you can't control? What's your take? Are you just like, "This is our little hill over here, and we're going to win this battle, knowing that we can't win the war?"
Greg: No, no. We're absolutely about winning the war. So I'm a recovering social psychologist. You talked about incentives, and I think for me it's actually kind of like a puzzle, and you're trying to figure out: what are the incentives at play? Who is being motivated to do what? And as you described, healthcare is incredibly complex. The people that pay for healthcare aren't always the people who receive healthcare. People are incentivized around, or maybe even only see, part of the customer journey that leads to health and wellness. So I view this big landscape of healthcare as being just right, because of the level of complexity, to keep it interesting for me.
Brian: Tell us about the part of the puzzle that you're working on at Vidant. What's your role there, and what does your puzzle piece look like?
Greg: Yeah, so it's a big puzzle. It's one of those 4,000-piece puzzles you get, and half the pieces are missing, which makes it even more interesting. So my role is, I lead the enterprise data and analytics team here at Vidant, and I have operational responsibility for all of our data systems, which include operational data systems, data movement systems, data capture, and monitoring. I also manage enterprise information management, which includes things like governance, data governance, and data integrity, information security and privacy. Data science and BI fall under my remit. One area that I am always excited to talk about is our analytics product management and enablement group, which is helping make sure we build the right products for the right reasons. We enable users to get the most out of those.
Brian: Got it. So the analytics product and enablement group … I think you said something along the lines of, "Making sure they're building the right products for the right people." So how does one go about making sure they're doing that in an enterprise space?
Greg: We've introduced product management as a discipline and a skillset here at Vidant. It's brand new. Really, a lot of what I talked about in my book was—much like what you talk about with your content—how do you create data products that are human-centered and usable? All too often, and I think you and I talked before about this, there's this notion that we just build products.
And when I got here, we were right in the midst of decommissioning 10,000 reports, many of which were lightly used or never used. And so the question becomes, in a world of limited resources, how do you make sure we're actually building the things that matter and that will get used? So product management focuses the light on use case-centered approaches and design thinking to actually come up with and craft the right data products that start with empathy. That's kind of what we have been trying to teach and do as part of our practice. And that extends, of course, to durable data products that you might see, like reports or AI models, but also just services. So how one experiences our enablement programs, for example, is a designed service. So it's been fun.
Brian: Sometimes I try to talk about the design process to non-designers, when we're talking about design thinking, as being very intentional about the outputs and the products and things and tools and applications we're building, as opposed to, "Well, this is kind of what emerged, and at the end we polished the turd," as it sometimes is. This is about being very intentional throughout the process. But I feel sometimes with technical people it can feel a little bit hand-wavy or kind of squishy. It's not very scientific. How do you build a culture that wants to do this in a place where, I assume, you have a lot of technical people working, especially in the analytics area? How do you do that?
Greg: Well, I like your word choice there, of being intentional. We talk about that all the time. It's being purposeful and intentional and transparent about what we're doing. I think another way to talk about that is, we have been engaged in lots of activities over several years. Activities, as you know, don't always lead to outcomes. And so our focus is to try to avert the urgent in lieu of the important, and really focus on, "What are the outcomes we want to achieve?" One of the ways we've done that is to try and use a use-case, human-centered approach to actually fulfilling the needs of the organization. As an example, we recently did a gap assessment. There was some new technology being added onto our electronic health record. The business was quite concerned, because they were moving away from older technology, which was tried and true for them. They were concerned about whether the new applications would meet their needs, both in terms of operational workflows, but more importantly in terms of the decisions they needed. They needed to be able to move patients around our system more efficiently than they've ever been able to do before. And so we approached that using design thinking to craft "as a"—so as a bed manager, or as a transfer nurse—"I need to," and that helps define what the decision support need is. And then, "so that I can." The "so what/so that I can" piece was actually the most important, because that really focused on the outcome. Instead of saying, "Hey, I need a report, can you add another column and sort it like this?"—that only gets looked at once—we're able to then focus our energies on building the prioritized products based on the "so what" question. To me, that was fun. And it changes the narrative. I think our technical staff, to your point, we like building things that people use. I could see how many hits my site got, or how many times an AI model ran and predicted the right thing. That was fun for me, and I think our technical folks are no different than that. They want to see real utilization of data products.
Brian: Sure. I mean, it's also a lot more fun to work on stuff people care about. Especially if you're in healthcare. You could probably feel like a cog in this giant system. But when you start seeing how it makes a difference in someone's life—potentially literally, in the healthcare space—I think it provides fuel to keep doing things that way, instead of falling back on the old way. For people who maybe aren't familiar with this kind of Mad Libs template that we're talking about—"As a [role], I need to do X so that Y happens"—the "so that Y" part is so often missed. I think there's a lot of stuff that's built with just the beginning part—"As a whatever, I want to do X, period, end of story"—and there's no evaluation of what experience is needed in order to facilitate that last piece. That's so critical. So is there an example you can give the listeners about maybe the old way things were being done, and how, when you tried this, there was a particular learning moment that happened, or you saw a light bulb change for somebody, or you had a different outcome on the project because of this? I'm interested in a before/after.
Greg: Yeah, I think the proof is in the pudding, so the jury is still out on the latest example that I shared with you. But another example—and I won't go into the details too much—historically, people deliver data. And I talk a lot about whole-brain thinking and whole-problem thinking. And when we understand the whole problem, the whole "why" about someone's job, we recognize pretty quickly that the reason Apple was so successful with their initial iPod is that it didn't solve just the problem of storage. It didn't solve just the problem of finding music, and it didn't solve just the problem of listening to music on the go. It actually solved a whole system of problems for people. And so a good example is, as most healthcare companies are facing today, there's a nurse shortage. We can't afford to lose the nurses we have. We can't hire them fast enough, and if we don't take care of them, it will continue to represent financial challenges for any health system to pay contract traveling nurses, which we know is just a fact of life with changing demographics. So before, our leadership was getting one report from one department, another report from another department, generating another report internally, and a fourth one was being calculated at the end of shifts, literally on the back of paper, and being keyed into another tool. Whole-problem thinking is understanding those personas and actually working with them to understand, "Tell us where you sit in this process, and what part of that problem are you trying to solve?" As we grow up as a system, with almost 2 million patients spread over 29 counties, we have a tremendous responsibility to our team members and our patients. So helping them become more efficient and being human-centered increases team engagement, team satisfaction, and patient satisfaction. But it also reduces the friction and accelerates the opportunity to make decisions in a more timely way. To me, that's the fun part. That's the piece that keeps me coming back every day.
Brian: You mentioned product management. Is it a new function? Is that what you were saying?
Greg: Yes.
Brian: What was it before, and what was the indicator that made someone say, "Oh, we actually need this to be a separate organization?" Someone doesn't just wake up and say, "We need product now." How did that come about?
Greg: Sure.
Well, that was being driven primarily by me, begging and pleading for an additional resource that could focus on this. And the key message there was that we need to move from projects to products. One of the manifestations of what necessitated that move was that our customers expected an ongoing dialogue and a relationship about product maturity and durability and change. Projects are defined for a reason. They have a purpose. They have a beginning, they have an end, and you move on to the next thing. With data products, those that are being used hopefully will change, because they've been in use and we get refinement. And so there was this reflection that, because product durability wasn't being seen in the project deliverables—they would be done and then would be turned over to operational support for maintenance—the voice of the customer should never get taken out of the lifecycle of a product. We thought that was such an important point that we actually staffed it accordingly.
Brian: Uh-huh. So does your product management group lead this human-centered effort to make sure these data products are built around desirable outcomes, really with that focus on outcomes? Or do they facilitate the design process and all of that? Or is that coming more from your group? How do you work with them?
Greg: Yeah, so the analytics product manager sits within my organization. And to be clear, we have only one of these roles. So it's an evolution of understanding and maturity. But healthcare, as you've cited, is really complex. Oftentimes we do feel like a cog on a wheel, because it is so complex, and there are so many participants involved. But one of the things that's fun about that also is that it's a multidisciplinary problem. Our informaticians are working with our clinical front-line workers on workflows and optimizing those kinds of things. They're a part of that equation. Our application teams, who are actually building interfaces, are a part of that team, because those interfaces will effectively capture data that we'll be able to use later in data products or feed back to them. Of course, our clinical and front-line workers are critical components of that user story and being able to craft that. So the reality is, it is a multidisciplinary problem. Even though our analytics product manager is just one of those people driving the conversation to completion, it really is just a part of an overall strategy.
Brian: Sure. And do you have designers involved in this process as well, or not so much?
Greg: We have people who play those roles. We do have a wonderful chief experience officer, and she has a whole team that does experience design, and so we tap into her group, who are absolutely amazing. But then our group, we're doing a lot of things by our bootstraps, as they say.
Brian: Got it. Can you tell me more about how you interface with them? I assume, maybe incorrectly, that they're probably doing a fair amount of this front-line research?
Greg: Yeah, so we have a number of governance bodies, a number of groups around the organization. Some are focused on providers and physicians. Others are focused on nursing and workflows, others on internal shared services functions like HR and finance. A good example is, we're developing a program to allow patients to express gratitude to their provider in the organization. So that team is actually creating an experience so that it reduces friction in the process and gets us really good transitions of touchpoints.
So there are a lot of touchpoints in patient care, and a provider can't be expected to do all those things. So having the experience design folks actually building that patient-centered experience is absolutely key and, I think, fun to be part of. If I could give up my wonderful career in analytics, it would definitely be for user experience design.
Brian: Nice. That sounds really interesting. So this sounds good, but tell me about the journey to get to this point where you're doing things this way. Was there resistance to this human-centered approach, this design thinking approach? Did it feel hand-wavy at all to some of the people you're working with? Did it have to kind of be pushed on them a bit before the value became obvious? What was that like?
Greg: So, recognize it is a journey. We still sell every day as to why user-centered design is critical. The good news is, our chief experience officer was here about two years prior to me joining. So she had already laid the groundwork and the language about how we design experiences for team members and patients. For me, I was simply coming on board and applying it in a different domain, that is, in the world of data products. So the ground was fertile. There's still a lot of confusion and misunderstanding about the difference between a product manager and, say, a project manager. There was a lot of work to help people understand the difference and why we need someone in those roles. A great example is, when I first got here—Vidant has about 13,000 team members—we were dealing with requests from 13,000 people. One of the first things we did was a wholesale stakeholder analysis, where we looked at who truly are our customers. We narrowed it down to 18. It might seem like a big jump, but it's given us voice. It's given us the ability to have a conversation at the highest levels about what our priorities are and where we should be spending our time. The reality is, we went from 365 tickets, if you will, in the backlog for requests, to zero. We did that in nine months, all by focusing the conversation on what's important.
Brian: Wow, that's pretty significant. So this dovetails into my next question, but how does one get from 10,000 unused reports to knowing they're not used and not important, to actually being able to just get rid of them? How did you do that? Is it the same process?
Greg: Yes, it's interesting. One of the things we looked at was what was actually being used. Our first pass at decommissioning was, if it hadn't been used in the last 24 months, we just turned it off. There was no conversation about it. Some of those have crept back in and said, "Hey, wait a second. I do this every three years for some accreditation," and we're like, "Great." So we rewrote it in the newer technology. Those were pretty minor. But you mentioned utilization, right? So how do we know whether something is successful? I firmly believe that we need to design in and consider a persona that might be left out in a lot of our analytics products, and that is, as an analytics leader, I'm a persona. I'm a stakeholder in the design of that product. We're requiring that all of our products going forward actually take into account my needs as a stakeholder. I need to know: Who's using this? When was the last time they were using it? How long did they stay on pages? All the stuff that you would normally expect, but that often gets left out of analytics products, because we don't consider the actual analytics developer as being a key stakeholder in that.
Knowing that we're going to design in the need to understand utilization and stickiness of data products, if you will, is critical. Sometimes there was a lot of work that went on to get us to an understanding of what was actually being used, but I think we've done a great job at actually shifting that, so now all data products require some metrics associated with how we're going to measure their effectiveness.
Brian: Cool. I like that, because it takes away this "We can do something, throw it over the wall, and then check off that we accomplished something" [approach]. That's the beginning of the journey, not the end, right? Like, "Okay, now it's there. Now we get to see if anyone uses it to do something. This is the beginning. It's not the end."
Greg: That's exactly right.
Brian: Let's shift over to ethics a little bit here. You had mentioned the word "empathy" earlier, which is really the foundation of creating human-centered experiences and products and solutions. So tell me about how you're approaching ethical AI and your playbook around that.
Greg: Yeah. This is one of the areas that I think is just fundamental to what we do. At the end of the day, I tell people our mission is to deliver data that people can trust in a way that's usable and actionable, built on a foundation of data literacy and dexterity. That trust in the first part of our core mission is essential. If I deliver data that's wrong, or inconsistent, or somebody shows up with a different report, reporting a different metric, then I've lost all credibility. The same is true with AI models: "What was the process?" My staff will probably cringe when they hear me say this, but I often say, "If we develop something in a robust and repeatable manner, then the outcomes are often robust and repeatable." So we've taken a risk-based approach to assessing how we validate every data product that we deliver. If, for example, it has broad reach, broad implications, or a large number of patients could be affected by a decision that is based off of the data that we deliver, that's high risk. If it's a one-off—which we hope we don't ever do, but sometimes we do what are called investigative research programs, where we're just investigating the order of magnitude, so it just has to be directionally correct—then we'll do a different level of validation. So we quantify that in a risk matrix. Depending on where you fall on that risk matrix, the level of validation that goes into that becomes critical. That serves as input to our AI playbook, whether we're engaging an outside vendor to adopt one of their commercially available AI models, whether we developed it in house, or whether we've acquired a model through a partnership or collaboration with another organization. They all follow the same core principles, which is that we will do proactive testing against algorithmic bias. I wrote a paper back in the fall for the North Carolina Medical Journal on ethics in AI, and really focused on this idea of algorithmic bias and how we prevent it from getting into our data products. I think it's a critical thing. We'd rather do fewer things and do them well than do lots of things and fail.
Brian: Mhmm. What are some of the challenges you have on the non-technical side of using some of these more predictive and prescriptive technologies at Vidant?
Greg: Yeah, so the number one rule that I have when we go to develop a predictive model is that we have consensus in our workflows, processes, best practice protocols, whatever you want to call it, for how we respond to something given a target variable. So let's say we're trying to predict something, like whether a patient is septic or not. Pretty important thing to get right, but more important is, what do we do about it from a clinical perspective if we don't do it right? Fundamental to any development of an AI model, we must agree clinically on what course of action to take. If we don't have that, then we really ought not to even be talking about a predictive model, because we're going to see variation of response, and our predictive value will continue to be eroded by inconsistent variations in process. So I think that's probably one of our biggest challenges. The second is the identification of high-value use cases. Part of what we're trying to do is elevate the understanding of what AI can do for us. For example, recently I presented to our senior leadership team on what we're doing in AI. What is it? What are its implications? What are the high-value target use cases? There are lots of things that we could do, but which ones are the things that are going to matter? The way we think about those is, those that are aligned to our strategic priorities are the ones that we're going to be focused on. But getting people to put energy into validation, into input on feature selection, all of those things, requires conversation in a new language. I think that that's one of the things. If I could claim a superpower, it would be the ability to actually bridge the conversation between business and technical folks. But there's a rate-limiting factor there with me, so that's not a universal language. Either our business speaks tech, or our technical people speak business. So I think we just have to strive to get better and better and better at that, so we can make sure we're bridging those gaps. Because you can't do it. I think any organization that builds models and thinks it's a technical exercise is going to fail.
Brian: Yeah, I would agree with that. So how do you go about scaling this so that you don't have only one person who can do this? Is this a training issue? Do you train the technical people to get better at doing this—your data scientists, the analytics people? Or do you train the product and business front-line managers to come to a data group with more informed problem statements and things like that? How will you scale that?
Greg: Yep, great question. As part of our analytics enablement program, we've actually developed a course, a curriculum around question design, really intended for our business people who want to be able to ask questions of data but don't need to get into the technical details. So I actually think it's both. We've got to make sure that people can ask questions. Our chief operating officer for the health system and I are going to be rolling out a program to actually help people create some awareness of why this is important, the fundamentals of question design, and being able to get to the crux of an issue. You can ask a question and get 10 different answers, depending on how the person actually interpreted that question. It could be all over the place. And on the other side, as you point out, the technical people have to get better. We have to get better. I'll classify myself as a technical person.
We have to get better at extracting needs in a way that is understandable, interpretable, and really actionable, from a technology perspective. It's like teaching someone a language they never knew they needed. There's a lot of resistance to it.
Brian: What kinds of resistance?
Greg: Well, from the technical side we often hear, "Why can't they just answer the questions? Why can't they just tell us what they need?" But it's much like Steve Jobs. If he had asked people what they wanted in a mobile phone, they probably would not have described the iPhone. So it's a little bit of creativity that is not typical of the skill set of a traditionally technical person. I hear these dialogues all the time. It's really quite comical. I spent 20 years in consulting, and the conversation was the same in pretty much every organization. Business people would say, "I need X." The technical people would say, "Well, send me your requirements." And the business person would say, "Well, tell me what's possible." And it would be this circular conversation. We introduced design thinking into our consulting practice about five years ago as a way to bridge that conversation. It was tremendously successful. Once you start talking about empathy, it's really tough for technical people to revert back to "Send me your requirements," because you've completely changed the dialogue.
Brian: Yeah, the power of the why question. I fully agree with you on that, and I think for technical people listening to this, you have to check it at the door if you make the assumption that someone necessarily knows what they actually need. It's the leading indicator. They don't know. It's the analogy of the cast on the arm. "Doctor, I need a cast." Your response should not be, "Okay, what color should it be, and how thick would you like it?" It's, "Well, no. We're going to do a diagnosis. Why do you think you need a cast?" "Well, my arm hurts." You have to go through a diagnosis there. Some people will be better at saying, "No, I actually do need a cast. I can tell you why I need a cast, and here's the problem space." That sounds great, I think, for a technical person to receive that type of request. But at least in my work, that's a myth. Even with my own clients, they don't come to me with that well-formed of a problem definition. It needs to be unpacked. So whether you're doing that work on the data science or analytics side, or on the product management side, somebody needs to be doing this work, and I think it's mostly a team sport. You can't have a conversation with one person anyway, and you definitely can't have a good product conversation without talking to an actual end user as well. But we have to check those assumptions at the door that someone actually knew exactly what they needed. It sounds bad, in a way, when you say it that way. Like, "This is your whole department, and you don't know what you need." But we make assumptions about what the other person knows about our space and our domain, and what you might think is totally obvious isn't to somebody else. So I think the design process helps us be objective and open ourselves up to not knowing, and then getting to a place where we start to know, and we carefully prototype quickly, to fail quickly, and then learn moving forward, without doing a giant technology investment in something that may not pay off.
And it doesn't build trust, and if people don't trust you—not so much that you're a liar and a cheat—but [people will say], "I can't depend on this group to deliver the goods. It takes forever, and whatever I get back, I can't use it. It didn't really help, and by the way, it's too late now. I'm already on to the next thing. Whatever. Just throw it in the report drawer and shut it." You know, 10,001.
Greg: That's right. It's interesting. I'm going to steal that analogy about the cast, because that's perfect for what we often encounter, and it will certainly resonate in this environment. You mentioned something I'm going to add to, which is this idea of failing fast. That's a cultural mindset shift that I think has been challenging for us as an organization as well: "Hey, we're going to do a project. We're going to be successful. Here's what the measures of success are." Oftentimes the measures of success are, "We did it on time, on budget, within scope." Being able to create an innovation culture where learning is the measure of success is absolutely a mindset shift that is really hard. I don't know what your experience has been.
Brian: Yeah, well, absolutely. No one really wants to spend money and time and resources on that, unless it's a really enlightened leader. But I think deep down inside, the smart executive knows that you don't bat .900 when you're doing innovation. You bat .200. You suck. It takes a long time, let alone to build a repeatable process to do innovation. That means that you obviously went through some type of fail process, and then you learn, and you inform it going forward, before you can have a pattern of doing it. It's hard to swallow that, and I think that's why starting small, using small projects—I like using small teams—and you build it out that way through a little kernel, and try to spread it that way, as opposed to implementing it top-down as in, "We're now going to do innovation. That's our mission." It's like, "Good luck." It doesn't generally work to try to shift the whole organization. So find a small project, and find the right-minded people for that, but you also have to provide a runway for that group to do it. Someone has to say, "This might not work, but we can use design thinking to help us fail a little bit earlier, and to know what we learned from it, and then push it forward, so that people understand why this is not working, and then you can factor that in, maybe, to the next pass." As opposed to, "This project is now nine months behind. No one really knows why. We're measuring code check-ins, and bug reports that we've squashed, and commits, and how many sprints did we finish? And we are using agile, so we're doing something right." It's like, "Well, all we're doing now is measuring engineering activities, because we're totally disconnected now from whether or not any of this is going to matter." And it's easy to measure that, because you can count the bugs, you can count the sprints, you can count the number of commits, and so on. I don't know if that's how it is on your end, but it's tough. It's not easy.
Greg: Yeah, the classic adage is, "Tell me how someone's measured, and I'll tell you how they'll behave." If you measure by lines of code, guess what? I'm not going to be efficient with my code.
Brian: Yeah. So given where you are now and your experience here, is there something you would have changed since you got into this whole data space and health?
If you were starting over 10 years ago, is there something you would change about where you are now or how you approach your work?
Greg: That's an existential question, isn't it?
Brian: "I wish I had done it this way," or something like that?
Greg: Yeah, it's fascinating. I don't think I would have changed anything. I may have spent a little less time on learning new things, and perhaps more time on developing deeper relationships with some of my customers. I think the human-centeredness of what we do is just so critical. So that's probably the one area: you know, how are they incentivized? What keeps them up at night? So I could have been a better designer.
Brian: Mhmm. Cool. Well, thanks for sharing that. This has been a super great conversation. I really appreciate you sharing these insights about what's going on at Vidant, and some of the nuggets from the Analytics Lifecycle Toolkit. Is there one thing you would give as a takeaway from this conversation, or that you'd like to share with people listening—people in the data science field, analytics leaders, technical product managers? Is there something they should walk away with that you think is important?
Greg: I think the big one for me—and I'm acutely aware of this challenge for myself—is that you cannot underestimate the power of collaboration and multidisciplinary perspectives in getting buy-in. That's really what design thinking is all about: getting that understanding, that empathy. It's really understanding how to walk in someone's shoes. I see so many analysts throughout the world, even in Kaggle contests, where we want to apply the technical processes and procedures and code without truly thinking it through. I do this in my class at Duke. I taught machine learning in the graduate program at the Fuqua School of Business. People view data science oftentimes as a technical journey: that we need to understand the features, and we need to understand what methods are appropriate, and, "Let me see if I can get the model fit statistics to work," right? Even if you spend just a little bit of time answering some fundamental empathy-type questions—even if you don't have an opportunity to interact with those users, just put yourself in the shoes of someone and try to empathize: "How am I going to use this data? What is my mindset? What am I thinking?" I think when I see teams that do that in my courses—we do a lot of team projects—they're far more successful than the people who view it as a technical exercise. And feature selection is really all about understanding behavior. The other thing that I encourage people to do is learn as much as you can about behavioral economics. Understanding the incentives that drive the world around us is essential to being successful in it.
Brian: Awesome. There are some great inputs there for furthering your career, if you want to. I would second everything you said here. If there's one thing that I've heard from most of the leaders in the data and analytics space, with regard particularly to data scientists, it's finding this "other" missing skill set, which is not the technical skill set. It's understanding the human behavioral piece and really being able to connect the fact that your technical work does have this soft skill stuff. The qualitative aspects here, the learning about human behavior, very much have relevance to feature selection, as you said, and to building things that are actually going to produce some type of useful outcome. So full plus-one to everything you just said there.
Greg: Well, I appreciate the opportunity to speak with you today, and I'm excited about the work that you're doing. I look forward to developing more evangelists throughout the community at large.
Brian: Cool. Well, thank you. How can our listeners follow your work? Are you on Twitter? LinkedIn? Where are you at?
Greg: Yep. LinkedIn is probably the best place. My Twitter handle is @GregorySNelson. It'll be fun to connect with other like-minded folks.
Brian: Great. Awesome. Well, I will definitely put links to both of those places in the show notes. Again, we've had Greg Nelson here today from Vidant Health, VP of Analytics and Strategy. Thank you for coming on Experiencing Data and sharing your thoughts.
Greg: Thanks, Brian.
Brian: Cool. Cheers.
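
Two of the practices Greg describes in the episode lend themselves to a quick illustration. The first is the usage-based decommissioning rule ("if it hadn't been used in the last 24 months, we just turned it off") and the requirement that every data product ship with utilization metrics. The sketch below is purely illustrative and is not Vidant's actual tooling; the usage-log structure, report names, and the 24-month cutoff as applied here are assumptions drawn only from the conversation.

```python
# Illustrative sketch: flag reports with no recorded use in roughly 24 months.
# The usage log here is a hypothetical mapping of report name -> last access time.
from datetime import datetime, timedelta

last_access = {
    "daily_census": datetime(2020, 1, 15),
    "bed_turnover_q3": datetime(2017, 6, 2),
    "transfer_volume": datetime(2019, 11, 30),
}

cutoff = datetime.now() - timedelta(days=730)  # ~24 months

# Reports with no recorded use inside the window become decommissioning candidates.
stale = [name for name, last_used in last_access.items() if last_used < cutoff]
print("Decommission candidates:", stale)
```

In practice the same question ("Who used this, and when?") would be asked of whatever usage telemetry the BI or reporting platform already records; the point of the sketch is only that the check itself is simple once that telemetry is designed in as a requirement.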
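The second is the "proactive testing against algorithmic bias" in Greg's AI playbook. One very simple check of that kind compares a model's positive-prediction rate across patient groups (a demographic parity gap). This is a minimal sketch, not the method from Greg's North Carolina Medical Journal paper; the group labels, predictions, and the 0.1 review threshold are hypothetical.

```python
# Illustrative sketch: compare positive-prediction rates across two patient groups.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]                      # model outputs (1 = flagged)
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]        # patient group per case

totals, positives = defaultdict(int), defaultdict(int)
for pred, grp in zip(predictions, groups):
    totals[grp] += 1
    positives[grp] += pred

rates = {grp: positives[grp] / totals[grp] for grp in totals}
gap = max(rates.values()) - min(rates.values())

print("Positive-prediction rate by group:", rates)
if gap > 0.1:  # hypothetical threshold a team might agree on up front
    print("Parity gap of", round(gap, 2), "exceeds threshold: review the model before deployment")
```

Real clinical bias testing would look at more than one fairness metric and at error rates, not just flag rates, but the same pattern (agree on a threshold up front, test every model against it before release) mirrors the playbook idea Greg outlines.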
