034 - ML & UX: To Augment or Automate? Plus, Rating Overall Analytics Efficacy with Eric Siegel, Ph.D.

Experiencing Data with Brian T. O’Neill

March 10, 2020 | 00:35:34

Show Notes

Eric Siegel, Ph.D. is founder of the Predictive Analytics World and Deep Learning World conference series, executive editor of “The Predictive Analytics Times,” and author of “Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die.” A former Columbia University professor and host of the Dr. Data Show web series, Siegel is a renowned speaker and educator who has been commissioned for more than 100 keynote addresses across multiple industries. Eric is best known for making the “how” and “why” of predictive analytics (aka machine learning) understandable and captivating to his audiences.

In our chat, we covered:

Resources and Links:

Machine Learning Week

#experiencingdata

PredictiveAnalyticsWorld.com

ThePredictionBook.com

Dr. Data Show

Twitter: @predictanalytic

Quotes from Today’s Episode

“The greatest pitfall that hinders analytics is not to properly plan for its deployment.” — Brian, quoting Eric

“You don’t jump to number crunching. You start [by asking], ‘Hey, how is this thing going to actually improve business?’” — Eric

“You can do some preliminary number crunching, but don’t greenlight, trigger, and go ahead with the whole machine learning project until you’ve planned accordingly, and iterated. It’s a collaborative effort to design, target, define scope, and ultimately greenlight and execute on a full-scale machine learning project.” — Eric

“If you’re listening to this interview, it’s your responsibility.” — Eric, commenting on whose job it is to define the business objective of a project.

“Yeah, so in terms of if 10 were the highest potential [score], in the sort of ideal world where it was really being used to its fullest potential, I don’t know, I guess I would give us a score of [listen to find out!]. Is that what Tom [Davenport] gave!?” — Eric, when asked to rate the analytics community on its ability to deliver value with data

“We really need to get past our outputs, and the things that we make, the artifacts and those types of software, whatever it may be, and really try to focus on the downstream outcome, which is sometimes harder to manage, or measure … but ultimately, that’s where the value is created.” — Brian

“Whatever the deployment is, whatever the change from the current champion method, and now this is the challenger method, you don’t have to jump entirely from one to the other. You can incrementally deploy it. So start by saying, well, 10 percent of the time we’ll use the new method, which is driven by a predictive model, or by a better predictive model, or some kind of change. So in the transition, you sort of do it incrementally, and you mitigate your risk in that way.” — Eric

Transcript

Brian O’Neill: Welcome back to Experiencing Data. This is Brian O’Neill, and today I have Eric Siegel on the line, of Predictive Analytics World fame, although you’ve done a bunch of other stuff, and you’ve done so much in the space of predictive analytics. Welcome to the show, and tell us about your work in this space with data science.

Eric Siegel: Thank you very much, Brian, and thanks for including me. Yeah, so I’m the founder of the Predictive Analytics World conference series. And going further back in time, I’m a former academic. I was a computer science professor at Columbia University, focused mostly on machine learning, and I’ve been a consultant since 2003. I wrote a book called Predictive Analytics, and the subtitle of the book is actually an informal definition of the field: Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die. The updated edition came out in 2016. It’s an accessible book for any reader, and it’s used as a textbook at more than 35 universities, but it doesn’t read like a textbook. It’s more of an entertaining, anecdotally driven, but conceptually complete introduction to the field. And alongside the consulting, we’ve been running the conference, which has been growing steadily for the last 12 years. I also released a web series of 10 short episodes about machine learning called The Dr. Data Show, and I’ve been writing op-eds about social justice and other ethical concerns that arise with the deployment of machine learning.

Brian O’Neill: Cool, yeah. And I’ve been enjoying reading a lot of your articles and your opinions on things. And one of the things that struck me in the material is, well, I’m going to throw a quote at you: “The greatest pitfall that hinders analytics is not to properly plan for its deployment.” Talk to me about what’s going on here. This ties very much into human-centered design for me, which is, when there are humans in the loop, it all comes down to that last mile: are people going to engage with our solutions, or are they going to let them hit the floor? For whatever reasons, it’s humans doing things or not. Tell me about that.

Eric Siegel: Yeah, your podcast is focused very much on the human-centered aspect, and that is the first and foremost area of concern with regard to where things can go wrong, which can seem ironic to people first coming into the field, because you think, “Well, this is a very technical thing, like rocket science, computers that learn from data.” And that core software, the predictive modeling methods, algorithms, software, whatever you want to call it, is quite technical at the center of it. But that technology is not where things most often go wrong. What most often goes wrong is not planning for how you’re going to use those things. You don’t jump to the number crunching. You start with, “Hey, how is this thing going to actually improve business?” And more specifically, render mass-scale operations more effective: marketing, fraud detection, credit screening, and other processes, all the things that are done in large numbers, on a large scale, at your organization. Most of these things can be, and are, improved by the predictive scores provided by a predictive model. That’s why machine learning is also known as predictive modeling: it’s learning from data to create these models. And the models, which are sort of the patterns or formulas you’ve ascertained from the data, or rather that the computer ascertained, the whole point is to then use those models. And that’s what we mean by deployment or operationalization: put them into action, actually integrate them into existing mass-scale operations in order to improve those operations.

So you’re actually trying to make your business run better: target people with marketing who are more likely to buy, target people for retention offers who are more likely to leave, spend fraud auditors’ time on transactions more likely than average to actually turn out to be fraudulent, take the risk of providing somebody credit or approving their credit card application on people who are better credit risks, or tune their limits accordingly. Insurance, all these decisions: to run them by the numbers means changing the way those day-to-day, moment-to-moment decisions are made. And because you’re changing the current operations, not just putting a new technology in the back end, but its output then actually actively makes change, you need change management. You need a plan, and you need buy-in for that plan. You can do some preliminary number crunching, but don’t greenlight, trigger, and go ahead with the whole machine learning project until you’ve planned accordingly, and iterated. It’s a collaborative effort to design, target, define scope, and ultimately greenlight and execute on a full-scale machine learning project.
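
A minimal sketch, in Python, of the score-to-action integration Eric describes here. The customer fields, the churn model, and the 0.30 threshold are all hypothetical; the point is only that the model emits a score per customer, and a separate operational rule consumes that score to drive the mass-scale decision.

    from dataclasses import dataclass

    @dataclass
    class Customer:
        customer_id: str
        churn_probability: float  # score emitted by an already-trained (hypothetical) model

    def select_for_retention_offer(customers, threshold=0.30):
        """Return the customers whose predicted churn risk exceeds the threshold."""
        return [c for c in customers if c.churn_probability > threshold]

    customers = [Customer("A001", 0.12), Customer("A002", 0.47), Customer("A003", 0.81)]
    for c in select_for_retention_offer(customers):
        print(f"Queue retention offer for {c.customer_id} (p={c.churn_probability:.2f})")

The decision rule, not the model, is the part that gets wired into operations; that is the piece Eric argues must be planned and agreed on before the project is greenlit.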

Brian O’Neill: Yeah, and do you know if anyone’s prototyping any of these solutions? Because I feel like, even for example in marketing, predicting who’s going to not re-up their subscription at the end of the month, these types of predictive scores, a lot of this is ripe for prototyping the solution: what does the literal delivery of the information look like to the marketing people, who would be the ones deciding whether or not to send out a snail mailer, or an email campaign, or do nothing? You could figure out: if we presented something like this, even if it’s the wrong people, since we actually haven’t run the model and we don’t know if these 10,000 people are the ones that are actually going to leave us, but theoretically, if we presented this information to you, what would you do next? I feel like a fair amount of this could be prototyped without doing any data science, to figure out where operationally it would fail. What do you think? Is that crazy?

Eric Siegel: When you say prototyping, do you just mean testing out a new process, or do you mean prototyping—

Brian O’Neill: Yes, I mean the actual, again, the last-mile piece. Because the data science piece ends with some type of output, but the output isn’t the business outcome. So if you could simulate the outputs to some degree, and I know you probably can’t do this with everything, you could find out, especially if, for example, marketing has never seen this type of deliverable. It sounds great to the business to be able to predict who needs nurturing, from a marketing perspective, to retain their subscription or whatever. It sounds great, until the actual data lands in your lap, and you’re like, “Oh, well, yeah, this is great, but I actually need to see what the last 10 touches were that they got from direct marketing already. So it’s nice to know that you predict this, but I’m realizing that I need all this other information now before I would actually know whether or not to send them a nurture campaign or whatever.” And so all of a sudden, the prediction is right, but they don’t actually have what they need to do their job. And so it falls on the floor, or whatever. I feel like a lot of this could be prototyped to figure out these failure points early, and fixing them may or may not be extra data science work; it may be something else.

Eric Siegel: Right, no, that’s a great point. So I see what you mean now by prototyping: as in, sort of, try it out. So yeah, without even necessarily creating a model, or a sophisticated model, or one that’s—because there is such a thing as business rules that are made by hand, sort of good rules of thumb, without necessarily doing a lot of number crunching. So as a first pass, you could say, “Hey, look, let’s make a few rules and pull out this list of prospects that we believe are the most likely to cancel or churn in the next quarter,” and then hand that over and see what it means for the salespeople who are going to be answering calls and on the phone with them, or even managers in offices of a bank. And I know a colleague who did churn modeling, and that’s how they deployed it: the managers in the bank wanted to know, when one of their customers walks in the door, is this one of those people flagged as more likely to be cancellers? What are they going to do with it? What are their complaints or concerns? Does it end up being actionable? Put that context on it.
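
A rough illustration of the “business rules first” idea Eric mentions: flag likely churners with hand-written rules of thumb before any model exists, so the people who would consume a real model’s output can react to the list. The account fields and the cutoffs below are invented for the example.

    def likely_to_churn(account):
        # Hand-written rules of thumb; fields and cutoffs are invented for illustration.
        return (
            account["months_since_last_login"] > 3
            or (account["support_tickets_90d"] >= 2 and account["tenure_years"] < 1)
        )

    accounts = [
        {"id": "C-17", "months_since_last_login": 5, "support_tickets_90d": 0, "tenure_years": 4},
        {"id": "C-42", "months_since_last_login": 1, "support_tickets_90d": 3, "tenure_years": 0.5},
    ]
    watch_list = [a["id"] for a in accounts if likely_to_churn(a)]
    print(watch_list)  # hand this list to the people who would act on a real model's output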

So, let me take a step back, though, to put a perspective on this, which is that probably for most of the cases of these sorts of business applications I’ve mentioned—so for targeting marketing and retention offers at customers, fraud detection, and credit scoring—a large part of this is where we’re actually automating decisions. So who gets suppressed or included in a contact list for direct marketing? Who gets approved for a credit card application? There are large swaths of customers for which these decisions are actually end-to-end, fully automated. And it’s a matter of integrating that, figuring it out, getting the buy-in, and making it happen.

But your question, and I believe the focus of your work largely, is where those decisions are supported, so they’re informed by these predictions. So for example, a customer service agent sees, oh, this customer is calling, and then there are a few indicators on the screen that are informed by the prediction. This person has a big red light next to them, because they’re five times more likely than average to cancel next quarter, something like that. Well, what do they do with that red light? Ultimately, it’s informing them, and during the conversation, the course of doing business, the interaction, it’s a human decision informed by a quantitative prediction from the predictive model. That would be decision support, rather than decision automation. Both kinds of informing decisions are what we call deploying or operationalizing the model, integrating it into existing systems. And in the case of decision support, yeah, there are that many more human-factor considerations that have to go into eking out how well this is going to work when we do go to deployment. How will it actually be used or ignored? Will it seem helpful? Are there pieces of this that are missing? Is there a different way we have to deliver that information visually on the screen for that human?

So in that case, the human, the customer service agent, or whomever it is who’s taking these outputs of the model into consideration to inform their interaction or decisioning, that person is going to need to receive this information in some specific way and act on it. And so yeah, by prototyping, there’s sort of a system to that: let’s try it out, see if they understand it. How much training do those people need in order to understand the decisions? And as I mentioned, when you get into these ethical considerations, a perfect example of this is predictive policing. Judges and parole officers use exactly this type of information to inform decisions about how much longer somebody should—should somebody be released on parole now? What kind of sentence should they serve? It directly informs, and of course, therefore, will in some cases be the deciding factor for, how long convicts stay incarcerated. So it’s obviously very important that that information is acted upon in a sound way.
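
One possible rendering of the “red light” decision-support indicator Eric describes above: compare the customer’s predicted cancellation probability to an average baseline and map the lift to a traffic-light label on the agent’s screen. The baseline rate and the cutoffs are assumptions for illustration, not anything prescribed in the conversation.

    BASELINE_CANCEL_RATE = 0.04  # assumed average quarterly cancellation rate

    def risk_indicator(predicted_probability, baseline=BASELINE_CANCEL_RATE):
        lift = predicted_probability / baseline
        if lift >= 5:
            return "RED"     # e.g. five times more likely than average to cancel
        if lift >= 2:
            return "YELLOW"
        return "GREEN"

    print(risk_indicator(0.22))  # RED
    print(risk_indicator(0.09))  # YELLOW
    print(risk_indicator(0.03))  # GREEN

Whether the agent sees the raw probability, the lift, or just the color is exactly the kind of human-factors question the prototyping discussion is about.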

Brian O’Neill: Yes, and I probably should have qualified that I was speaking in the context of augmented, human-in-the-loop types of solutions, as opposed to fully automated, but this is actually a really good segue to my next question, and that is: do you feel like organizations should look at an augmented, human-based output as a first step? Or is that not necessarily a first step, and going fully automated may be just as viable? Like, do you see those as a continuum, or are they really just separate choices that have nothing to do with each other?

Eric Siegel: Well, no, they interact; there could be a continuum, or one as a first step in some cases, but for many applications, they’re separate. And many applications just go straight to decision automation. For example, if I’m targeting my direct mail, I’m just going to select a list based on the scores. So, I’ve got a million people on my list and I want to pick the best 200,000 to send a postcard to; I’m not going to have a human making decisions for each one. That’s sort of the point of mass marketing. Now, I’m just doing that overall mass marketing more effectively, more efficiently. It still may be mostly junk mail, it still may mostly get a very low response rate; it’s just that it will get a significant enough boost that your ROI can increase by multiple times over, easily. So, in fact, this whole issue of making sure that models are used correctly, or, taking a step back, that when you plan a machine learning project, you’re planning for that use, that deployment of the model, that integration of how it’s going to be used from the get-go. That whole management issue, the planning of the operationalization process from the get-go, is actually the theme of an entire track at our conference. Predictive Analytics World is the conference series I’ve been running since 2009; our largest North American event is in Vegas, May 31 to June 4, including full-day training workshops before and after.
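
A sketch of the fully automated direct-mail case, assuming per-customer response scores already exist: rank the list and keep the top slice, with no human deciding per record. The function name and toy data are hypothetical; the 200,000 default simply mirrors Eric’s example.

    def pick_mailing_list(scored_customers, k=200_000):
        """scored_customers: iterable of (customer_id, predicted_response_score) pairs."""
        ranked = sorted(scored_customers, key=lambda pair: pair[1], reverse=True)
        return [customer_id for customer_id, _ in ranked[:k]]

    # Toy usage (k shrunk so the example is easy to read):
    toy = [("u1", 0.02), ("u2", 0.11), ("u3", 0.07), ("u4", 0.01)]
    print(pick_mailing_list(toy, k=2))  # ['u2', 'u3']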

That main two-day conference is actually five conferences in one. We have four different vertical Predictive Analytics World events: business, financial services, health care, and industry 4.0. And then there’s a sister conference, Deep Learning World; deep learning, a type of neural network, is an advanced form of machine learning. And the umbrella of all those five conferences, the more catch-all one across all business application areas, is called Predictive Analytics World for Business. That one has three tracks, and one of those three tracks is entirely on project management and operationalization. And just within that one track, we have sessions from Cisco, Federal Express, Google, LinkedIn, Comcast, Xerox, Caesars—this is funny because the conference is in Caesars and we have a presentation from Caesars Entertainment—as well as others, including the CIA. And in fact, the chief of analytics of the CIA, Michael Simon, is speaking, and the name of his session is “An Argument for Decision Support over Decision Automation.” So again, as my initial answer to your question: yeah, in some cases there is a deliberation. Should we have a human in the loop, or should we start with a human in the loop first? And you can go online and see the full detailed description of his forthcoming session, where he’s arguing for decision support over decision automation.

But there’s no changing the fact that in many of these applications, you kind of go straight to automation, and having a human in the loop per decision doesn’t scale. And that’s sort of the point of automation in general: it scales, including the automation of learning from the data. And in this conversation, we’re talking about the automation of using what we’ve learned from data, the deployment of the model, the integration of those predictive scores. So if the concern is, hey, if it’s a first-time deployment, this is the first iteration of this modeling project and business value proposition, how do we mitigate risk and make sure that we don’t suddenly jump to a decisioning process that somehow is faulty or buggy on some level? There are a lot of ways to mitigate that risk. One way, by the way, is simply to start with more of an incremental deployment. So say you’re doing 100,000 decisions a day, or, as I mentioned with marketing, let’s say you’re deciding which ad to display on the website in real time, each time somebody opens the page, based on the profile of that user or customer. This is obviously also going to be automated without a human in the loop, because it takes point-oh-one seconds.

In any case, whatever the deployment is, whatever the change from the current champion method, and now this is the challenger method, you don’t have to jump entirely from one to the other; you can incrementally deploy it. So start by saying, well, 10 percent of the time we’ll use the new method, which is driven by a predictive model, or by a better predictive model, or some kind of change. So in the transition, you sort of do it incrementally, and you mitigate your risk in that way.
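
A sketch of the incremental champion/challenger rollout Eric describes, under the assumption that decisions can be routed per customer: a fixed fraction (10 percent here) goes to the new model-driven method while the rest stays on the current one. Hashing the customer ID keeps each customer’s assignment stable across repeated decisions; the function names are illustrative.

    import hashlib

    def use_challenger(customer_id, challenger_share=0.10):
        # Hash the ID into 100 buckets; the lowest 10 go to the challenger method.
        bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
        return bucket < challenger_share * 100

    def decide(customer_id):
        if use_challenger(customer_id):
            return "challenger: score with the new predictive model"
        return "champion: keep the current decision rule"

    for cid in ["A001", "A002", "A003"]:
        print(cid, "->", decide(cid))

Ramping challenger_share up over time, while comparing outcomes between the two groups, is what limits the risk of a faulty decisioning process reaching the whole operation at once.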

Brian O’Neill: The fact that you have such a large section of the conference focused on operationalization makes me want to ask you a question that I asked Tom Davenport on this show earlier. Say that ten years ago, on a scale of one to ten, one being the worst and ten being the best, the analytics field was at a one in terms of its ability to generate value in the last mile, or in the operationalization sense. Where are we now, in 2020? How would you score it?

Eric Siegel: Well, and are you talking about value in terms of its ultimate—

Brian O’Neill: I don’t mean technically correct, alone. It had to create a positive outcome: not just a technically viable solution, but it actually had to be put in production, it had to get used and create business value, however the organization defined that value.

Eric Siegel: Yeah, so in terms of if ten were the highest potential, in the sort of ideal world where it was really being used to its fullest potential, I don’t know, three, three and a half. Is that what Tom said?

Brian O’Neill: You’re a little bit more positive. He gave it about a two to two and a half, but it’s fascinating to me that you’re both within about one degree of difference there. So that’s really interesting.

Eric Siegel: Fortunately for me, Tom actually wrote the foreword to my book, Predictive Analytics. And, yeah, the biggest limit to that potential is just that, although almost all large organizations are doing this, and many mid-sized ones, even among the large organizations, even though they’re all using it for some of these main business areas that I’ve mentioned, marketing and fraud detection and such, there are so many sub-problems within those areas where it could also be used. So it’s about it becoming that much more pervasive. I’d say that’s the main way in which the potential still exists for it to continue to grow.

Brian O’Neill: Mm-hm. Well, you wrote an article in Harvard Business Review that had a five-step process for deploying predictive analytics, and it started with defining the business objective as step one, which sounds logical, and we’ve heard that. So I want to stop right at that stage and ask you: whose job is it to define this, particularly if an organization is new to this whole field? I feel like, from my conversations with people, this right here is where things start to break down, particularly if it’s not a standard thing, like who should get the mailer. That one is very easy to understand and define: we have a million customers, which 20 percent should we send a mailer to? Whose job is it to define this business problem with enough clarity, and to build in the fact that there’s going to be iteration? You may find out, oh, we can’t get the training data, we need to go revisit the problem space. Whose job is it to get that right, when the business may not understand what’s possible, and the data people may be saying, “Well, what do you want us to sol—give me a problem. Give me a quant problem to go work on.”

Eric Siegel: The responsibility—if you’re listening to this interview, it’s your responsibility.

Brian O’Neill: I like that.

Eric Siegel: That is to say, there’s no preordained answer, right? Most ideally, it comes from the top down, so the CEO is super fluent with the concepts and has the mission and the vision. But that’s usually not the case; maybe it’s a little bit less than that ideal. Or you have some data scientist who is very technical and is, as I am, very much a proponent of the technology and sees the value of it, but is very focused on the core technology and doesn’t necessarily have a very loud voice, or even a presence, in the business-side meetings where this kind of thing would start to be socialized. So that’s sort of bottom-up, and maybe they can convince their manager’s manager. I think that it’s got to come from all directions, and it evolves, and ultimately it becomes a collaborative effort. Everyone has to be on the same page. Everyone has to get involved and ramped up to a certain degree, and put in their two cents with regard to the potential pragmatic considerations and practical constraints in this plan for how it’s going to be deployed. Because there’s not one person who’s so smart they see the whole thing and can be a one-person show; there are so many facets that are unseen within each role in the company. So I might see, “Hey, we can target the marketing that much better.” But then I go talk to the operations manager, who’s the person running the marketing. They might not be willing to make that big of a change to the way they’re doing the marketing right now, maybe only a certain amount, and then I might say, well, if you change it, you’ll be more profitable: according to the scratch calculations, the bottom-line ROI of the marketing campaign will triple. And it might take a fair amount of back and forth and listening to get on the same page. Oh, so here’s the implicit underlying philosophy behind why you still want to market to a larger list even though most people don’t respond: maybe it has a longer-term advertising effect as a side effect. There are so many considerations and human factors and things that aren’t necessarily spelled out explicitly. That, therefore, requires iteration, and meetings, and conversations, and sharing of insights.

Brian O’Neill: Got it. I would just emphasize, for people listening to this who are very technical, that Eric mentioned the word “listen” here several times, which implies you’re having conversations about this. And it implies that you didn’t just jump to “I will solve your marketing mailing list problem,” hand them a model, and then walk on to the next project. What I hear you saying is, you don’t get a pass just because you’re technical. You don’t just get a pass when you get a vague question like, “We would like to add machine learning to our product. Can you help?” [laughing]. There’s an unpacking that needs to happen there. Because I’ve had some people feel like, “Well, come back to us when you figure out where you want it in the product and what you want it to do.” And then there’s a camp of people that feel like, “Well, no, the data scientists understand what might be possible. And even though they’re not responsible for the line of business directly, they should be handling that negotiation, asking the right questions to help the business person arrive at where it might make sense to use machine learning, so that we’re not just doing technical exercises and rehearsals.” Is that your [inaudible 00:25:47] summary?

Eric Siegel: Yeah, no, exactly, right. So the way you just put it is perfect reasoning for why it requires meetings. A bunch of meetings. [laughing]

Brian O’Neill: Yes, and so, I think as long as people are focused on the outcome, it’s so much about the outcome. And I talk about this a lot on this show with design, we really need to get past our outputs, and the things that we make, the artifacts and those types of—the software, whatever it may be, and really try to focus on the downstream outcome, which is sometimes harder to manage, or measure I should say, but ultimately, that’s where the value is created, right?

Eric Siegel: Right. It’d be a lot more fun if I put on my science hat and all I cared about was how cool this underlying technology is, and believe me, it is. The idea of learning from data, and drawing generalizations that hold in general, is quite fascinating scientifically, and the methods are cool as heck. So with that hat on, it would be a lot easier if I could go off into my cubicle alone, and never meet with anyone, and just do the number crunching and create magic. So to put the whole argument in one new way, going back to what you said a second ago: somebody might say, “Hey, could you add some machine learning to this particular system?” But it’s not like some technology you plug into your website and now your website runs 25 percent faster. It’s a core technology you’re leveraging, but overall it’s not a technical endeavor, it’s an organizational endeavor. You’re making a change to operations. So it’s a multifaceted, cross-enterprise collaboration.

Brian O’Neill: Mm-hm. Do you have any anecdotes or experiences that come to mind around model interpretability, and having to really give consideration to the way a prediction was presented to the customer, and how that may or may not have affected the success, the outcome, or the value creation?

Eric Siegel: That’s a great question. It kind of depends on the context. Who are the consumers of the score? Should it just be a red light, a yellow light, or a green light: what are the chances this customer is going to head off in the wrong direction, or what are the chances they’re going to be responsive to this particular product offering? Should we keep it that simple? Or is it a different class of consumer, VP-of-Sales-type people, who need more nuance, or might be served by it? Because you could go either way. When you serve up the probability, which ultimately it is, you could put thresholds on it and then put the colors red, yellow, and green on it. Or you could say, “Hey, here’s the actual specific number,” and show it on a scale from 0 to 100. And either way, you could say, “Well, here are the main factors that influenced it.” So the fact that this customer has been with us for more than seven years might be a main determining factor. There are lots of factors like that about each individual, and there are ways to sort of reverse engineer this predictive model and try to create explainability around the resulting score for each individual. And whether that’s useful just depends on the use-case scenario. It’s often very much desired, and it certainly can’t hurt to explore the potential use of it.
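
A simplified sketch of the per-score explainability Eric outlines, assuming a linear or logistic-style model where each feature’s contribution can be approximated as coefficient times value; the feature names and weights below are invented. Alongside the probability or color, the consumer of the score would see the few inputs that pushed it up or down the most.

    coefficients = {"tenure_years": -0.35, "support_tickets_90d": 0.50, "late_payments": 0.80}

    def top_factors(features, weights=coefficients, n=2):
        contributions = {name: weights[name] * value for name, value in features.items()}
        ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        return ranked[:n]

    customer = {"tenure_years": 7, "support_tickets_90d": 1, "late_payments": 0}
    for name, contribution in top_factors(customer):
        direction = "raises" if contribution > 0 else "lowers"
        print(f"{name} {direction} the predicted risk (contribution {contribution:+.2f})")

For this invented customer, seven years of tenure dominates and lowers the risk, echoing Eric’s example of tenure as a main determining factor; more complex models would need a dedicated explanation technique rather than raw coefficients.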

Brian O’Neill: Yeah, I wonder if that helps, again, even if it’s primarily because it encourages engagement with the prediction that’s being made. So even though it may not improve the prediction, per se, technically, it is what unlocks the outcome, because it makes someone pull the trigger and say, “Okay, I’m going to go ahead and grant this loan,” or whatever it may be. I mean, for some of that, there are probably legal reasons why the interpretability needs to be present. But I feel like, again, if you think about who the consumer is, in a human-centered, human-in-the-loop type of situation, that’s an important element to make sure that they’re willing to make a decision that ultimately rides on them, as opposed to an automation kind of situation.

Eric Siegel: Yeah, absolutely.

Brian O’Neill: Yeah. We’re wrapping up here in a moment, but I wanted to ask you, since you’ve been in the field for such a long time: was there anything you would change if you could just rewind ten years, right now, with your career and the work that you’ve done in this space? One big thing where you’d say, “Ah, I would have done this differently”?

Eric Siegel: Oh. Ten years? That’s not very long anymore, actually.

Brian O’Neill: Twenty.

Eric Siegel: Oh, well. [laughing] Twenty. I’m really glad I have a PhD, and I feel like it helped me a lot as far as being a disciplined person and the ability to think abstractly. But as far as where I ended up going, a PhD, technically, is training to do research and development, or to be an academic. And I didn’t end up pursuing that; I mean, I was only on the faculty for three years full-time. Then I became an entrepreneur, with a couple of startups. And now, since 2003, I’ve been an industry consultant. So it took me six years to do that; I could have sort of gotten a Masters instead. I don’t know, I’ve never really felt regret about it. I don’t know if I have a great answer to your question.

Brian O’Neill: That’s okay. Some people, sometimes people have a—

Eric Siegel: Well, I would have… Predictive Analytics World now has an umbrella event called Machine Learning Week, because now “machine learning” is the term. When I started as a consultant in 2003, machine learning was strictly a research and development, academic research term, and as an industry word it was extremely arcane. Predictive ana—people were calling it data mining. But I was like, “That’s a silly term,” for a lot of reasons. But predictive analytics was the new term, and I was like, “Well, that makes sense. It’s at least a sound term that refers to what we’re doing.” And it was good to go with that term. But it would have been helpful to know, with a little more foresight, that machine learning was actually going to become the relevant term for my field. My next book will be called Machine Learning something, instead of Predictive Analytics. But predictive analytics is still a pertinent term. It’s just that machine learning really has taken over as the term. Unfortunately, artificial intelligence has also taken over as a term, and that term is very fuzzy and can kind of mean whatever you want.

Brian O’Neill: Yeah, it’s hard to change these words sometimes, once they get out there. So [laughing] you’ve got to go with the tide sometimes; that’s where it wants to go. That’s the language people speak, I guess. Cool. Well, this has been a great conversation. I appreciate you coming on Experiencing Data. And I’m just curious, do you have any closing advice for data product managers, analytics leaders, or data scientists? What would you leave them with?

Eric Siegel: Well, I’d say that the more you learn about the field, on both the operational business leadership side and the core-technology-under-the-hood side, the better. And whichever of those two sides you’re on, learn about the other more than you think you need. If you’re on the business side, know that the core technology, the machine learning methods, decision trees, log-linear regression, neural networks, the basic intuition behind them is not nearly as difficult to understand as you may think. It’s quite interesting; don’t be intimidated by it. It’s good to have a sense of it, and definitely a sense of what the data preparation entails. That’s the main technical, hands-on bottleneck and challenge. Ironically, it’s not the rocket science part; it’s just getting the data into the right form and format. And then, more generally, there are so many different roles along the lines of what we’ve been discussing today, so many ways you can be involved, both on the technical side and in terms of project management and operational deployment: the people who are consuming the scores, and determining how that’s going to work and integrate with existing processes. There are so many different roles and parts to play. It’s not just a number-crunching person in the corner doing it. It’s an organizational effort, and it requires lots of different participants. So look at it holistically and figure out which part of it might be most interesting to you. Because, chances are, there is an opportunity for you to get involved.

Brian O’Neill: Cool, thank you. That’s great closing advice. And where can people follow you? Is it LinkedIn, a website, social media? How can they follow up?

Eric Siegel: Oh, well, you can go to our conference website, predictiveanalyticsworld.com. You can go to my book’s website, thepredictionbook.com. And you can see my ten-episode Dr. Data Show, drdatashow.com.

Brian O’Neill: Awesome. Cool, I will definitely link those up in the show notes and Eric, thanks for coming on Experiencing Data, it’s been great to chat with you.

Eric Siegel: Yeah. Thanks for having me.

Brian O’Neill: Definitely. All right, cheers.
