Jana Eggers, a self-proclaimed math and computer nerd, is CEO of Nara Logics, a company that helps organizations use AI to eliminate data silos and unlock the full value of their data, delivering predictive personalized experiences to their customers along the way. The company leverages the latest neuroscience research to model data the same way our brains do. Jana also serves on Fannie Mae’s digital advisory board, which is tasked with finding affordable housing solutions across the United States. Prior to joining Nara Logics, Jana wore many different hats, serving as CEO of Spreadshirt, and General Manager of QuickBase at Intuit, among other positions. She also knows about good restaurants in PDX!
In today’s episode, Jana and I explore her approaches to using AI to help enterprises make interesting and useful predictions that drive better business outcomes and improve customer experience. In addition to discussing how AI can help strengthen personalization and support smarter decision making, we also covered:
“We have a platform that is really built for decision support. How do you go from having […] 20 to having about 500 to 2,000 decision factors coming in? Once we get that overload of information, our tool is used to help people with those decisions. And yes, we’re using a different approach than the traditional neural net, which is what deep learning is based on. While we use that in our tool, we’re more on the cognitive side. […] I’ve got a lot of different signals coming in, how do I understand how those signals relate to each other and then make decisions based on that?” — Jana
“One of the things that we do that also sets us apart is that our AI is transparent—meaning that when we provide an answer, we also give the reasons why that is the right answer for this context. We think it is important to know what was taken into account and what factors weigh more heavily in this context than in other contexts.” — Jana
“It is extremely unusual—and I can even say that I’ve never really seen it—that people just say, ‘Okay, I trust the machine. I’m comfortable with that. It knows more than me.’ That’s really unusual. The only time I’ve seen that is when you’re really doing something new and no one there has any idea what it should be.” — Jana
“With regards to tech answering ‘why,’ I’ve worked on several monitoring and analytics applications in the IT space. When doing root cause analysis, we came up with this idea of referring to monitored objects as being abnormally critical and normally critical. Because at certain times of day, you might be running a backup job and so the IO is going crazy, and maybe the latency is higher. But the IO is supposed to be that way at that time. So how do you knock down that signal and not throw up all the red flags and light up the dashboard when it’s supposed to be operating that way? Answering ‘why’ is difficult.” — Brian
“We’ve got lipstick, we’ve got kissing. I’m going to get flagged as ‘parental advisory’ on this episode in iTunes probably. ;-)” — Brian
“You can’t just live in the closet and do your math and hope that everyone is going to see the value of it. Anytime we’re building these complex tools and services, what I call human-in-the-loop applications, you’re probably going to have to go engage with other humans, whether it’s customers or your teammates or whatever.” — Brian
Brian: Jana Eggers is the CEO of Nara Logics and she’s also a self-proclaimed math and computer nerd who took the business path. We had a fascinating conversation about what she’s doing with AI at her company and how she’s helping enterprise businesses move the needle with their ability to take data and make interesting and useful predictions to better drive business value and customer experience.
We’re going to talk a lot about explainable AI and how powerful it is to provide the why’s behind predictions when you’re delivering them, especially to business stakeholders. We’re also going to talk about how some of her technology allows companies to do what-if simulations, removing features from predictions and re-running them in real time. There’s a ton of great information in this conversation with Jana, so I hope you enjoy it.
Welcome back to Experiencing Data. I’m happy to have Jana Eggers on the line from Nara Logics. Actually, how do you pronounce the name? How do you guys like to say it?
Jana: That’s a big one. We hear the first word pronounced both ways all the time, and we also get logistics instead of logics on a regular basis. So it is Nara Logics, but we’re okay either way.
Brian: Excellent. Well you are the CEO of this company. You guys work in the AI space. We met at what, I think it was the IA Symposium, and had a nice dinner there and I enjoyed our conversation and wanted to bring you on the show a little bit to talk about what you guys are doing at Nara Logics, but also about how we are bridging the gap between some of these technologies that are in place and also how end users are experiencing them. How do we make all this technology as easy to use as possible so that business value is actually created?
Could you tell us a little bit about … You guys are using some interesting technology, as I understand it, to make machine learning and predictive intelligence improve the quality of these outcomes, modeled on how our brains work. Is that correct?
Jana: Yes. Quickly, we have a platform that is really built for decision support. And decision support can be used in two ways. One is in front of an end user. Usually that’s considered personalization, so making sure the information, either for their job or for an end consumer, shows up when they need it. Everything from product recommendations to tips on how to use the product they’re working with.
And then it’s also for decision support. So you’re running a large factory, and you’re now adding many sensors into your process. How do you go from having around 20 decision factors to having about 500 to 2,000 decision factors coming in? Once we get that overload of information, our tool is used to help people with those decisions. And yes, we’re using a different approach than the traditional neural net, which is what deep learning is based on. While we use that in our tool, we’re more on the cognitive side. So that idea of, okay, I’ve got a lot of different signals coming in, how do I understand how those signals relate to each other and then make decisions based on that?
Brian: Yeah, I was reading something on your website and I was wondering if you could talk a little bit more about it. Maybe I misunderstood it, because my background is not on the math side or the engineering side. But it sounded like your technology is able to look at recency in a different way, that it’s able to adjust its predictions based on information that’s happening recently. As opposed to, you know, you picture this little information nugget going in at the beginning of the funnel, and it goes through the same funnel and spits out at the other end as the outcome. I’m kind of picturing that in this case sometimes it drops into the funnel halfway down already, or maybe that’s not the right analogy. But could you talk a little bit about that?
Jana: No, I love that you’re picking up on that. We were just having a conversation with one of our customers early this morning, because they’re based in Europe. We were talking about just this, about how things change with the context. And we had shown them some sensitivity analysis around that, and around the immediate context, which is really the minutes and hours right around when they were making a decision. They said, but wait a minute, you’re learning from two years of data, why are you emphasizing so much what’s happening right now? And we said, well, that’s what we’re showing you: the context of what’s happening now is really impacting what the decision would be. You do want to still learn from those two years of data, but you have to put those two years in the context of what’s actually happening right now. And it’s those things that the human brain does really well, which is, wait a second, I need to pull up the information that’s related to this, but particularly related to this context.
So I think you summarized it well. Sometimes if you think about where it drops in, it may feel like it drops into the middle of the funnel, and the reason for that is you’ve already collected data from those prior steps in the funnel, so you don’t have to recollect it. And you’re using that to pull in the right information at the right time. That’s what our secret sauce is: how to quickly pull together the different pieces of information. In the brain it’s often called chunking. So pull the right chunks together for my context to give me a better result.
Brian: Interesting. I’ve used the chunking method in teaching music before. Can you explain, maybe give us a concrete example of how … So first of all, my assumption is that to an end customer, if we were talking about it from an experience standpoint, you probably don’t notice any of this. This is probably down in the weeds, right? You’re not supposed to necessarily have to know all of this is going on. Is that a safe assumption, first of all, or not necessarily?
Jana: Well, we would say no in general, because one of the things that we do that also sets us apart is that our AI is transparent, meaning that when we provide an answer, we also give the reasons why that is the right answer for this context. We think it is important to know, not necessarily what calculation has to happen, but what was taken into account and what factors weigh more heavily in this context than in other contexts.
So kind of yes and kind of no to answer your question. We’re not meaning to confuse people, but we do want to point them in the right direction so they have confidence in the answers that are being given.
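To make that idea of a transparent, context-weighted answer concrete, here is a minimal sketch in plain Python. It is purely illustrative and not Nara Logics’ platform or API: the option scores, factor names, and weights are hypothetical stand-ins for whatever model and signals you already have.

```python
from dataclasses import dataclass

@dataclass
class WhyReason:
    factor: str    # a signal that was taken into account
    weight: float  # how heavily it counted in this context

def recommend_with_reasons(option_scores, factor_contributions, top_n=3):
    """Return the best-scoring option plus the top factors behind it.

    option_scores:        {option: score}, already computed by your model.
    factor_contributions: {factor: contribution to the winning option's score}.
    """
    best = max(option_scores, key=option_scores.get)
    reasons = sorted(
        (WhyReason(f, w) for f, w in factor_contributions.items()),
        key=lambda r: r.weight,
        reverse=True,
    )[:top_n]
    return best, reasons

# Hypothetical example: recommending a lipstick shade, with the why's attached.
option, whys = recommend_with_reasons(
    {"coral": 0.81, "mauve": 0.74, "red": 0.62},
    {"summer skin tone": 0.35, "prefers a light feel": 0.30, "outfit palette": 0.16},
)
print(option, [(r.factor, round(r.weight, 2)) for r in whys])
```

Surfacing the why’s like this is what lets an end user see which factors dominated in this context and, as Jana describes later, push back on one that looks wrong.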
Brian: Yeah. I actually wanted to jump into kind of explainable AI space. We were emailing about that. You may have answered my question already. I guess what I was thinking of when you were talking about things like product recommendations, there’s probably a level of detail that you have to choose to surface based on who your customer is, right? I’m guessing if you’re recommending a lipstick color to someone that maybe the explanation of why isn’t as in depth as it might be for your factory floor. So is this something where you can kind of determine the amount of explainability that is provided with any of the recommendations that come back, kind of based on what’s needed by that particular user?
Jana: Clearly Brian you don’t wear lipstick.
Brian: I’m not wearing lipstick at this time, nor have I ever consciously worn it or put it on myself. My wife has put it on my cheek before maybe.
Jana: That is an important decision. And yes we may not get into the life safety issues with lipstick, although sometimes it can feel that way with lipstick. But that said, yes to your point, you don’t do all of the factors but you want to see the major factors. And honestly, even with lipstick wanting to see why this color is the right one right now, which may be different in the summer, it may be different based on the outfit that you’re wearing, that can actually sway your decision too. It’s like, oh I didn’t realize that that color and that color actually would clash, and it was something I hadn’t paid attention to or I hadn’t paid attention that my skin is getting more tan over the summer or something like that.
So I do think that oftentimes even those little hints that are the drivers can help us a lot. On the factory floor, obviously, if you’re talking about something like 2,000 factors, yes, you want to roll them up, and you also want to chunk them. And as you said, you do chunking in music. So the granularity of what you show does help people. But then you also give them the ability to drill down.
So another example is that we do work in the federal government space, and which equipment something is measured from matters. But you don’t have to say, well, it was this camera that’s installed on this equipment and it has this lens capability. At first you just say it’s the satellite, and this is the one that’s reading that. Anybody that has questions and wants to make sure that, well, this satellite actually has the resolution that I need to make this type of decision, can drill down on that and make sure it’s there. Otherwise, if you don’t have questions, you trust that the machine has that.
And some people have that kind of knowledge to drill down, and other people don’t. You wouldn’t, by the way, if they were making this decision without the recommendation.
Brian: One of the things that’s interesting to me here is when you talked about the … I’m curious about the before-and-after experience. When we talk about this factory floor example, I’m picturing the old way was maybe I use my hunches and guesses from being on the floor a long time, coupled with maybe some telemetry coming off the hardware that was thrown into Tableau or something, and you can see how many times the machine stamped the metal in the last hour, and is it higher or lower than it used to be? And then you kind of put together all this stuff in your head and you try to deduce, should I change it to seven or leave it at eight? Change the knob.
And now the new way is, you feed all this stuff into your model and it creates a predictive output. So what is that before-and-after like, especially when you go from the number of factors you’re considering being maybe in the teens, and you jump up to 2,000? For the customer, both the business consumer but also the line worker or whoever the person would be, maybe you could tell us a little about who the actual receiver of the intelligence you provide is. But what is that before-and-after like, and what’s the problem you have to solve for them to get right? Is just providing explainability how you build the trust there, or is there a bigger gap that you have to close with the technology, something that goes beyond “just have faith that it works, it’s really smart”? What’s that before-and-after like?
Jana: That is a great question. It is extremely unusual, and I can even say that I’ve never really seen it, that people who have been doing this job for a while, to your point someone that has had to be making this decision in the past, will just say, “Okay, I trust the machine. I’m comfortable with that. It knows more than me.” That’s really unusual. The only time I’ve seen that is when you’re really doing something new and no one there has any idea what it should be. And honestly, in those cases we really push on, how do we define a gold standard? What are we measuring against? So in those types of situations, even then, we look to see how we can build metrics.
But let’s go to the more normal case, which is people have been making these decisions, and how do they decide that the machine is actually taking into account more than they can? Part of it is our transparency, so those why reasons definitely have an impact. They’ll look at them and say, oh, but it shouldn’t be counting that, because I know that sensor is offline. It can be that simple, that a sensor is out of whack. And what’s great is we actually give someone the ability to say, that why reason, ignore it. And then they can re-process and see if that impacts the decision support that we’re giving, the options they have.
So there’s that ability to interact and drill down, like I said before. If somebody looks at that and says, I don’t think that satellite is capable of the level of reading that’s needed for the accuracy of this decision, we can back up and say, okay, let’s drill down on that and let’s see. And it’s not unusual that the person says, oh, I didn’t realize that one had that. So that ability to let them apply their knowledge, but also see the data that’s represented, is really a big part of them saying, okay, I’m starting to trust this. And that’s the direction that we’re generally going.
So it usually is someone that’s making this decision now, and you asked who that is. Who is making this decision now? Typically those people are already struggling with, how do I incorporate this data? And what’s happening most often, I’ll say this one thing, sorry, I know I’m going on too long, but-
Brian: No, please.
Jana: What’s happening most of the time is that each of these different new streams of information is starting to be analyzed on its own. And the problem with that is that the whole is really greater than the sum of the parts. So if you’re just doing an analysis on this new flow sensor that’s coming in, and what you’re doing is noticing when that flow gets disrupted, that can give you a lot of false positives, as an example. Yeah, it’s disrupted, but we already knew that, because we turned this down over here and that’s actually just as effective. So we need to be looking at these as very interconnected systems, which is what our system does: it brings together different streams, oftentimes at different granularities, different time rates, different types of information. How do you marry all of that to give the context of what’s happening now, so that I know that the flow going below normal in that sensor is perfectly fine because of the other things that are happening around it?
Brian: Right. I worked on several applications in the IT space doing this, and when we did root cause analysis we came up with this idea of referring to things as abnormally critical and normally critical. Because at certain times of day you might be running a backup job, so the IO is going crazy and maybe the latency is higher, but it’s supposed to be that way. So how do you knock down that signal and not throw up all the red flags and light up the dashboard when it’s supposed to be that way? Learning these things so that the real signal can come out, because otherwise you’re back to throwing all this stuff in front of the customer, and then they have to go through and try to figure out which signal is actually meaningful and crawl through all of that. Is it kind of like that?
Jana: Exactly. And the other big problem that people have is that these sensors are new, so they don’t have two years’ worth of data to train a traditional neural net on. That’s another thing we deal with often: hey, in some cases I have three years’ worth of data coming off this sensor, or off transactions that I have, whatever it is. And in other cases it’s brand new and I have less than three days. So how do I account for and balance that? That’s another place where we’re inspired by the brain, because our brain is actually really good at that. We could argue about how good it is, but we’re often looking at, huh, I’ve got this new type of information coming in, how do I balance that with all of my knowledge of something? How do I learn to play music when I’ve never played music before, but I have listened to music? There are some analogies that I can draw from that, and that’s the type of thing we’re trying to do with our system: figure out the balance that we have and can leverage.
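As a rough illustration of those two points, the emphasis on the immediate context and the balancing of a brand-new sensor against years of history, here is a small sketch of one generic approach: recency-weight observations with an exponential decay and shrink thin histories toward a prior. This is my own simplification, not how Nara Logics actually does it; the half-life and shrinkage constant are arbitrary.

```python
from datetime import datetime, timedelta

def recency_weight(timestamp, now, half_life_hours=6.0):
    """Exponential decay: what happened in the last minutes and hours counts most."""
    age_hours = (now - timestamp).total_seconds() / 3600.0
    return 0.5 ** (age_hours / half_life_hours)

def blended_estimate(values, timestamps, prior, now=None):
    """Recency-weighted mean, shrunk toward a prior when the history is thin."""
    now = now or datetime.utcnow()
    weights = [recency_weight(t, now) for t in timestamps]
    total = sum(weights)
    if total == 0:
        return prior
    recent_mean = sum(w * v for w, v in zip(weights, values)) / total
    # With only a few effective observations (a brand-new sensor), lean on the prior.
    confidence = total / (total + 3.0)  # 3.0 is an arbitrary shrinkage constant
    return confidence * recent_mean + (1 - confidence) * prior

now = datetime.utcnow()
flow_readings = [7.9, 8.3, 8.1]                            # hypothetical sensor values
times = [now - timedelta(hours=h) for h in (30, 2, 0.25)]  # old, recent, just now
print(blended_estimate(flow_readings, times, prior=8.0, now=now))
```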
Brian: Is it ever difficult for the end user of these services to understand the explainability that’s being provided with the predictions, or are they a part of creating the “why did it do this, here are eight bullets on why” that comes back? Are they part of the language and the presentation of what comes back? Especially if you’re talking to someone with internal knowledge, the factory worker who has done this for 30 years and has a lot of knowledge in their head about these systems, are they part of that design experience so that it looks familiar to them when it comes back? Or is it something where you have to iterate through to figure out if they can understand the signals that are coming back?
Jana: It’s some of both. They’re absolutely involved, and it’s an iterative process. Think of it as a normal product management process where you’re getting a prototype out there, you’re showing it to people, you’re talking to them about it. One of the things we do is work very hard to get things into what we call our connectome, which is roughly our knowledge graph. I’m going to use that term, though I always hate to, because there are also things about it that are very different from a traditional knowledge graph. But we get that knowledge graph going very quickly and then are able to show them some of the answers so that they can react to it. Because until you see it, it’s really hard to create it in your mind.
So that is absolutely an iterative process that we work on. Think of it more as starting with an MVP and iterating from there, and giving them the tools and flexibility. Our platform has the ability for systems users, not end users, to define that kind of language and at what levels. We’re also a startup, so we’re still building out functionality. But it’s a great question, because we’ve learned from customers that this is absolutely necessary, both the granularity as well as the ability to iterate to provide more clarity based on the particular problem that we’re solving.
Brian: Is there a particular anecdote or story? You said you’ve definitely learned something there. Can you share one of those learnings, or maybe a time you displayed something that came across strangely or was misunderstood, or maybe it caused a problem and you learned something from that experience and changed it? Is there anything that pops into your mind?
Jana: Well, one thing that’s quick and easy for everyone to understand: Procter & Gamble is very public about their work with us, and we’re grateful for that. We work with them on several things, but one of them is the Olay Skin Advisor. And there were lots of discussions about the why’s. Do we show her, because it’s normally a female user, the answers that she gave us? She told us that she likes things feeling light as air on her skin. And the question was, well, she knows that she told us that, so why are we giving that as a why? So there was a debate about that, not just internally with us but with Procter & Gamble and the Olay team. One of the things that came out clearly from customers is that they actually liked being shown even the answers that they directly gave, because they knew that the machine took them into account.
Brian: Right. How did they find that?
Jana: We were testing the why’s. So we were actually testing what showed up for customers and at what level. And it wasn’t that every answer showed up as a why for everything, because some of the answers didn’t impact everything that we were showing. But we were surprised that it wasn’t just what’s considered an inferred connection, something that we step to from the answers that we’re given. We were surprised at how many people said, I know I gave you that answer, but I like seeing where it applies. And so that was very interesting to us. When you talk about granularity, a lot of people think, well, show me the thing that I didn’t know. And I think one of the things that’s important, and that we learned there, is to verify that the things the customer thinks should be there are there.
Brian: Is that typically a space that you know that you need to go every time you guys take on a new client or new project, that that’s part of your standard process is that there’s going to be this gap and we have to understand what needs to be there and why it needs to be there? Is that just the standard part of the process?
Jana: Yes. Which is why we make it easy to change.
Brian: Right. You mentioned this a few times so I wanted to unpack this a little bit. The intelligence behind the systems that you guys use, is it manifested into a software application that you guys then customize for each customer, or are you more providing API hooks and then the client is responsible, or can pull in that information however they want into whatever tool? Help me picture, help us picture how it’s actually experienced by either … Obviously the Olay Skin Advisor, that’s probably through a web interface on a public website, right? But maybe the factory, like what would the factory experience be like? Who’s looking at what in order to receive this information that’s coming from you guys?
Jana: It’s the latter rather than the former. We are an API-based platform. In that way, think of us like Twilio. They do the communication for most of the applications that you use, and they do that by just calling their API for click-to-talk or something like that from within an application, or text, or whatever it is.
Just like with the Olay Skin Advisor, we didn’t build the Skin Advisor. We just built the intelligence behind it. And it’s the same thing with the factory system. They already had a dashboard. They’re not going to completely replace that dashboard. They’re not going to make their plant manager sign into another solution. What they’re going to do now is put another panel in that dashboard, probably replacing a few others, that says, hey, here are the top three things you should pay attention to. So that’s normal.
Now, we do have some basic interfaces that some people definitely use. Usually that’s kind of an interim thing: wow, we did this proof of concept and it went so well that we want to start using it now, and there are seven people that are going to use this, not 800 or 1,000 or millions as in the case of Olay. For that, we’re okay with them logging directly into the system, getting some answers, and going. Like I said, usually that’s something on a six-month-to-a-year timeline, while they’re getting it integrated into their own system.
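For readers picturing that factory integration, a client-side call might look something like the sketch below. Everything here is hypothetical: the endpoint, fields, and response shape are invented for illustration and are not Nara Logics’ real API. The point is just that the existing dashboard owns the UI and only asks the decision-support service for the top things to watch, plus their why reasons.

```python
import json
import urllib.request

def top_things_to_watch(plant_id, api_token, limit=3):
    """Ask a (hypothetical) decision-support API for the top items to pay attention to,
    with their why reasons, so an existing dashboard panel can render them."""
    url = f"https://decision-api.example.com/v1/plants/{plant_id}/recommendations?limit={limit}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_token}"})
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    # Assumed response shape: {"recommendations": [{"item": ..., "score": ..., "why": [...]}]}
    return payload["recommendations"]

# The plant dashboard keeps its own UI and simply renders whatever comes back here.
```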
Brian: You mentioned how you can turn off some of the why’s that came back and re-run. Does it go beyond turning the signals on and off, or is there any capability for what-if analysis? Like, clearly I need to change the flow rate, but should it be 82 or should it be 87, and what’s the impact going to be? Does this allow you to run those predictions as well, with more than just on/off?
Jana: Yes. How cool is that?
Brian: That’s pretty neat. How does the-
Jana: And you can do A/B testing on our platform. So even things like, if you just want to adjust a weight, hey, I think this is going to be more important or less important and I just want to check out something with the weighting. Or, I even want to test the why’s with end consumers and see which one drives someone to buy more. We actually have that capability. And with the A/B testing capability, I want to be clear, we also easily integrate with what you’ve already built into your product, which is not unusual either, particularly with consumer products. If you already have A/B tests built in, we fit right in with that, because it’s just another API call. So with the API call I say which variant I’m going with, and then those different results can be measured.
So we built that in, because we’ve also found that for a lot of customers, again, if they’re just starting out with prototyping with AI, it’s still new. And so while we didn’t build out a full test suite the way that an Adobe has, we do have simple functionality to allow people to do some more what-ifs. And then we have that real-time capability, again to your point: if you want to completely say, okay, what if my reality is very different than I think it is, how would it change an answer? You can do comparisons that way.
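Here is a rough sketch of what that kind of “ignore this factor, override that weight, tag the variant” interaction could look like from the client side. As before, the endpoint, parameter names, and payload shape are assumptions made for illustration, not the platform’s actual interface.

```python
import json
import urllib.request

def rerun_decision(plant_id, api_token, ignore_factors=(), weight_overrides=None, variant="A"):
    """Re-run a decision with some factors excluded, some weights overridden,
    and an A/B variant label so downstream outcomes can be compared."""
    body = json.dumps({
        "ignore_factors": list(ignore_factors),       # e.g. ["flow_sensor_12"] if it's offline
        "weight_overrides": weight_overrides or {},   # e.g. {"flow_rate": 1.5}
        "variant": variant,                           # which test arm this call belongs to
    }).encode()
    req = urllib.request.Request(
        f"https://decision-api.example.com/v1/plants/{plant_id}/decisions",
        data=body,
        headers={"Authorization": f"Bearer {api_token}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Compare a baseline against "what if flow were weighted higher and sensor 12 ignored?":
# baseline = rerun_decision("plant-7", token)
# what_if  = rerun_decision("plant-7", token, ignore_factors=["flow_sensor_12"],
#                           weight_overrides={"flow_rate": 1.5}, variant="B")
```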
Brian: Are there any particular risks or challenges in terms of the integration part? Not so much the technical ability to access the capabilities of the API, but in terms of making sure that there’s actually a positive outcome, that business value is created, that there’s a good experience. Are there any places where that maybe doesn’t go so well? I’m assuming it’s an internal engineering group or product group that would be utilizing the APIs, or is it a third-party engineering company or something like that providing the actual application that’s going to access your APIs? I’m just curious about that integration and whether it’s ever a challenge.
Jana: Most of the time people are already working with someone. Either they have their own internal resources, like I said, a factory dashboard is usually built by someone, either internal resources or an outsourced firm. Procter & Gamble, and I know they’ve released this publicly, sometimes works with agencies, sometimes with outsourced development, and they have their own in-house development group. It really depends on the project and the customer and kind of where they are. But most people don’t have a … This isn’t a new thing. They were already doing some of this, not with AI but with statistical approaches.
And I want to be fair, we’re working with very large companies. So none of these are small or midsize companies that maybe don’t have that access right now. These are really very large companies where this has pretty much been standard for them. I think we’re seeing more of that. This is why Salesforce paid a huge amount, huge numbers, for somebody like MuleSoft. Companies are realizing, I have to have some of these resources available, again whether in-house or outsourced, to move my data around so that I can really leverage the value of that data.
Brian: Are you often selling into a data science group, and is it something where maybe they don’t have the capacity to do what you do, or the way you do it? Or is it more that a business group pulls you in because they don’t have internal data science? You’re probably going to say it’s a mix of those, but I’m just curious, beyond being a data provider to you, how much is the internal data team involved in the science and intelligence part? How do you guys work together? And who picks up the phone?
Jana: It is a mix as to who is actually heavily involved in the project. But in one way it’s not a mix: every single company we work with has data scientists, and they have some form of engineers, whether they’re systems engineers or software engineers; that varies. This is really what is considered a multi-buyer complex sell, because I’ve got a technical sell, I’ve got an end-user sell, and that end user can be a literal end user or a data science person, and I’ve got my business side and financial side. How much is this valued in the company, and how much does it cost? We’ve got all of those aspects in play.
Usually if data science is heavily involved, it’s because we offer a capability they don’t want to spend the time on; it isn’t something that, as a company, they see enough value in building themselves. So I’ll give you a direct example, and again it’s because Procter & Gamble has been public about this; we have other customers that are very similar. The AI that was developed for the Olay Skin Advisor that does image recognition actually looks and determines my skin age, and then it looks and says, okay, for that skin age her crow’s feet are great or they’re bad. That was something very close to what Procter & Gamble values as part of their contribution. They need to understand skin really, really well. And they did go and look to see if there were parties doing that type of thing that did it as well as they could themselves. And it was a machine learning and data science group that developed that visual ID system.
And then they said, okay, now for pulling together all of the signals that we have about our products and product groupings and things like that, and putting that in the context of this user, that’s decision support, or a more targeted recommendation engine. They went and looked at recommendation engines, and ours had flexibility and capability, including the why’s, that they didn’t find in others, which is why they used it. But they said, we don’t feel like that kind of recommendation capability is something that we need to own and develop ourselves.
So that is a really good example of something that happens on a regular basis. Let’s back out and talk about the plant production and process engineering space, where we have several customers. Oftentimes, if they’re dealing with raw materials or chemicals, they have some specialty in that. So the analysis of the quality of what’s coming out at different parts and stages is very important to them, and they spend a lot of their data science resources on that. Now, the analysis of how to bring together all of those signals and react to them in context is often not something they have the capability of doing, or feel they need to build out at that level of quality specifically for themselves. So again, it’s a different use case, but it really is the same type of decision they’re making, which is a) I don’t need to build it myself, and b) I don’t have a reason to build it myself, so why wouldn’t I use a tool for that?
Brian: I see. For example in that case where you have these other data science teams involved, I would imagine they’re probably heavily part of your sales process and talking to the business about the value of these things, or is that not necessarily true?
Jana: Again, it depends on the group. Sometimes data science is like, look, I’ve got way too much to do, there’s no way I can help you with that. So they’ll come in and usually do some tests, like, okay, do I believe what they’re saying? They’ll come in and help somebody analyze it or something like that, but they’re not as heavily involved in the definition of what’s trying to be accomplished. That’s usually on the business side. But they’ll do some of the vetting and questioning and poking at what we’re doing. And sometimes through that they’re like, hey, I think I could use this for this other problem that I have, can we talk about that?
Brian: Is having the why’s and the explainability capability in your platform a high, medium, or low driver, do you think, of your sales and your success? How important is that to the client, or are they more just … Is it like, “I didn’t know you could do that,” a nice delighter, like gravy? They weren’t expecting it to be there, but it was nice. Can you talk about the importance of that? Or not.
Jana: Think of it as bookends. In the front-end phase, it’s absolutely something that people worry about, as far as explainability goes. In the attraction phase, people are usually like, okay, we’ve got to try this ML thing and I like the fact that you’re talking about explainability, and other people that I talk to aren’t. Or they talk about it, but they don’t really have a plausible answer for how they’re going to get there. So that’s at the very front end.
In the middle, it becomes a lot less important, because all they care about is the result. If we can’t get better results, even if we have explainability, it doesn’t matter at all. So in the middle chunk, the books between the bookends, it is highly unusual that that’s the reason.
Then at the end, where they’re like, okay, you can give me better results, this really matters, they focus again on the why’s. The time spent on the why’s is not as much as the time spent getting to the right results, but it’s the beginning and the end of the project where they’re key. There are three things we talk about most in how we differentiate to customers. First is that we can produce better results. They love that, they think it’s great, and the context is a big differentiator for them. Second, we have the transparency and explainability, so they can literally see everything that happens if they want to drill down that far, but they also have that more rolled-up view of explainability. And the third thing is that quick and easy way to add new data and not have to wait five years for it to be significant. So that ease of integration: now I want to add this new sensor, or I want to upgrade this sensor, and when I upgrade it, by the way, it has different capabilities than it had before. That ability is the third thing our customers come to us for.
Those are really the three key differentiators and reasons why people choose us, and it’s really in that order even though, like I said, most people start with us because they know that they’ll need the explainability. So it usually comes up in the beginning, then it goes away for awhile, then it comes back at the end.
Brian: Got it. Just a little tangent here, but I’m curious: what is the experience like when you talk about being able to add a new factor or measurement that’s going to go into the system and be part of the decisions that come out? I’m picturing, for example, a Hershey’s Kiss. It’s being made in a factory. There’s the chocolate, you’ve got the little chrome tin wrapper, and you’ve got the little piece of white paper. So let’s say, hey, for the first time we’re installing a camera at the stage where the little white piece of paper goes into the Kiss. We’ve never had that before, and we want to make sure that the paper sticks right up, straight up to the sky. That’s how it should be, and we’ve never been able to measure that.
So they install a camera. Now what? How much are you guys in the loop? What’s the capability to add a measurement like that? Or maybe you can give us a more realistic example if that’s a bad one. I always like visual ones.
Jana: No I like it, because I like thinking about the Hershey’s Kisses.
Brian: We’ve got lipstick, we’ve got kissing. I’m going to get flagged as parental advisory on this episode in iTunes probably.
Jana: I know, and shame on me for not getting that. That was a very good link. It’s a great question, and something we do tell customers: how much we have to be involved really depends on the customer. We have some customers, I mentioned we work with government, we have some government customers where we can’t even look at the data. We can’t know that they’re interested in that piece of paper sticking straight up. So what happens then is they’ll often come to us and say, hey, I have this analogy. And they’re used to that, because they often can’t even talk to their colleagues about what they’re doing, because of the need-to-know around so much national security information. So they’re used to doing that translation and saying, if I were thinking of this, what would I do? And then they can go do it all themselves.
We have other customers that are right in between, where they can do it themselves, but they like the backup of us saying, oh, look at this, or consider this, or think about it this way. And then whether we implement it or they do really depends on how intense it is. And then we have other customers that look at us and say, hey, I need that done, you guys tell me what it’s going to cost and do it. We have very few of those, just because we are not set up to be a services company. We are working on some relationships where we could turn and look at someone and say, hey, you guys … Usually what we’re doing is talking with their internal resources about how to do it. And then the problem comes in if they don’t have internal resources with the bandwidth to do it: who does it? That’s still what we’re trying to figure out.
But it’s really across a range. And what very much happens, and it’s a fine visual example, like you said, is: how do we train on that data, how do we see the features that determine whether that paper is sticking up straight or not? It’s really the features that get fed into our system. When do we need to say, hey, slow that down? We’re noticing that when the flow is this much, they tend not to be positioned correctly, but when the flow is this much, it’s fine. So that’s the type of thing we would start saying: quality on the piece of paper is going down, that feature is going down, so what do we know from the past, what can we learn, to say how we adjust for that?
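To make that “camera feature feeds the decision system” idea concrete, here is a toy sketch. The feature name, thresholds, and rule are entirely hypothetical; in practice the relationship between paper quality and flow would be learned from history rather than hard-coded like this.

```python
def paper_quality_signal(upright_scores, flow_rate, score_floor=0.8, high_flow=90.0):
    """Toy rule of the kind that might be learned from history: low 'paper upright'
    scores from the camera only warrant action when line flow is high.

    upright_scores: per-Kiss scores from a hypothetical camera model, 0..1.
    flow_rate:      current line flow, in whatever units the plant uses.
    """
    avg = sum(upright_scores) / len(upright_scores)
    if avg < score_floor and flow_rate > high_flow:
        return {"action": "slow the line",
                "why": [f"average upright score {avg:.2f} is below {score_floor}",
                        f"flow {flow_rate} is above {high_flow}"]}
    return {"action": "none",
            "why": [f"average upright score {avg:.2f} is normal for this flow"]}

print(paper_quality_signal([0.72, 0.78, 0.75], flow_rate=95.0))
```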
Brian: Got it. Wow, I love this conversation. I’ve totally learned a ton. We have to wrap up here soon, but I had a couple of other questions that are more state-of-the-market kind of stuff. I came across a data science survey and I was curious whether you’ve experienced this, either with Nara Logics or just smelling the same thing out there. Two of the challenges data scientists said they were having, on the non-technical side at least: one was data science results not being used by the decision makers, which I took to mean, we did all this work and then the people who were supposed to be our internal customers don’t actually leverage it. Does that sound familiar to you as something that you’ve heard before? Can you talk about why you think that is?
Jana: I think the challenge is that it’s really easy, particularly with big data, to not believe the results. They’re hard to understand. We even face that here. We have an ongoing discussion, and I’ve been here for almost five years now, about … Remember when I said I have to call it this because it’s the closest thing people understand: the knowledge graph that we build of a customer’s data, we call that a connectome, because in your brain, your connectome is the wiring diagram of your neurons. We’re inspired by biology, so we call it a connectome.
And we talk on a regular basis about how you visualize the connectome. It’s really hard, because there’s a lot of data in there. And I think that’s the challenge people have: they go do these big projects, but how do you help people conceptualize what you’re delivering, and give them the ability to question it and even communicate with it? Because when they look at it, oftentimes they’re not going to believe those results, because they go against their intuition. And I think that’s where people run into the most problems, with that type of situation. And I think it’s more of a communication thing. You’re so deep in it as a data scientist, you know the math, and what you’re not realizing is who your customer is.
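For readers who want a mental model of the “connectome as wiring diagram” idea, here is a tiny generic sketch of a graph of signals and how strongly they relate. It is not Nara Logics’ actual connectome structure, just an illustration of why pulling the right chunk of related signals is useful and why visualizing the whole thing gets hard as it grows.

```python
from collections import defaultdict

# A minimal "connectome"-style graph: nodes are signals or concepts,
# edges carry a relatedness strength.
connections = defaultdict(dict)

def connect(a, b, strength):
    connections[a][b] = strength
    connections[b][a] = strength

connect("flow_sensor_12", "pump_3", 0.9)
connect("pump_3", "backup_job", 0.4)
connect("flow_sensor_12", "line_speed", 0.7)

def related_chunk(node, min_strength=0.5):
    """Pull the chunk of signals most strongly related to a node, for this context."""
    return {n: s for n, s in connections[node].items() if s >= min_strength}

print(related_chunk("flow_sensor_12"))  # {'pump_3': 0.9, 'line_speed': 0.7}
# Even this toy graph grows fast as signals are added, which hints at why a real
# connectome built from years of customer data is so hard to visualize legibly.
```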
Brian: Does this tie into, well, maybe you can tell me, the second thing I wanted to ask you about from the same survey, and again I think this was the Kaggle Machine Learning and Data Science Survey from 2017. Another one was that managing expectations about data science initiatives is difficult, which I took to mean business stakeholders have inflated expectations about what AI and machine learning can do, for example. Is it tied at all to the expectations piece too, about what’s going to come back? Like, okay, here’s two million dollars and six months, go show me some magic and come back with something. Is it tied to that, or … Because the first thing you talked about is more about the understandability and the believability, versus a business person going in with the assumption that they’re going to see something unbelievable come back. To me those are a little bit different.
Jana: Yeah. I love what you said there, because they may say they want something unbelievable to come back, but then when something unbelievable comes back, they don’t believe it. So you have this non-virtuous cycle where it doesn’t help you, because they look at it and go, it doesn’t make sense. Well, wait a second, you wanted something unbelievable and magic. Do you understand magic? I think that’s a challenge.
One of the things that I do recommend is that people start in a place where they’ve already been using statistics, because as humans we’re not so great with probabilities, and at least if they’ve started applying some statistical methods, they’ve started thinking more about the different probabilities and impacts they could have.
So yeah, it’s a really hard thing to do, particularly when you’re a data scientist coming from this math background and people don’t understand the coolness of the math. I do say AI isn’t magic, it’s just math. And then some people argue and say, oh, well, math is magic. And it’s like, yeah, well, that’s part of the problem: we present it that way and it’s actually not. It’s a complication. And it’s a really cool way to get … I mean, I’m a mathematician, so I’m all for “mathematicians are amazing.” That said, I think we all do ourselves a disservice by saying that it’s magic. We get excited about the magic of math, but we also have to make it relatable to people and not let them feel like, oh, wait a second, that’s just an illusion, it’s not real. So how do we bridge that gap and help people understand it’s not an illusion, it’s actually really real, and there’s some great value in this data that is so big it’s very difficult to conceptualize? And together we have to work on giving faith to that expression of the data.
Brian: And do you think there’s a particular skillset that’s needed to do that? So if you’re suffering from that problem, if you’re working on data products or you’re a data scientist or you’re in that space, what is the skillset you would need to strengthen in order to have more success? Is it storytelling? What do you think it is? How do you get better at helping get your team’s head, or your stakeholders’ heads, around something this big?
Jana: Yeah, storytelling is a great example, and I do think that’s part of it. I also think it’s a collaboration thing, just like with software engineering in general. Before the days of product managers, and I’m that old, we used to have a marketer who developed an MRD, a marketing requirements doc, and basically threw it over the wall to engineers who were supposed to develop it without really understanding or collaborating.
So now we have product managers, designers, and engineers, and it’s generally understood, although not well practiced to be fair, that those three people work together to develop the best product. And what I tell people is, now you have a fourth person to add to that triad: a data scientist. So I think it’s not just up to data scientists. It’s also up to product managers and engineers and design and UX people to bring that together and build the full story of how the product, and I’m using the term loosely, your product, your offering, your service, whatever it is, is actually incorporating that data, learning from that data, and continuing to improve from that data.
Something I talk about is that, if you look at our cycle before, we would come up with an idea, build out that idea in an MVP format, and then test it. So you would cycle through that. Now the ideas a lot of the time are coming from data, then going up to the idea stage, then building a product around that, then learning from that data again. So you’re at a different starting point with your product: you’re starting with data that you used to have to create a product to even start getting. That’s where some of the differences are coming in, and you all need to do it. It’s not up to one person; it’s really up to everyone to understand what the value is within whatever offering it is that you’re doing. I hope that helped.
Brian: Yeah. You’ve basically summed up that I can’t just live in the closet and do my math and hope that everyone is going to see the value of it. It’s kind of the same anytime we’re building these complex tools and services where there are people involved, what I call human-in-the-loop applications. You’re probably going to have to go engage with other humans, whether it’s customers or your teammates or whatever, but you need to get out of that closet, and that’s definitely part of making sure that you’re successful, I think.
Jana: Exactly. This is not the case of just build a model and they will use it. What does that model do, how does it respond in this context, all of that adds up.
Brian: Right. Well this has been awesome. I love the conversation. Any parting words or any closing advice that you might have for business leaders trying to jump into this space or leverage some of these technologies? What they might want to watch out for? I think you’ve given us a ton of great things to think about, but wanted to give you a chance to have a closing word.
Jana: The closing word that I give is that AI isn’t just about the algorithm. It’s about the data, which you’ve probably heard, but it’s also about the results you’re trying to drive. I talk about AI’s trinity being the data, the algorithm, and the results you’re trying to drive. And anytime you change any of those, you have to rethink. So that’s the first thing: really understand that AI is the combination of those three things, which cannot be separated.
The second thing is that AI is about learning and iterating. So you really need to set up a situation where you’re not just going to develop an AI, put it into place, and never think about it again. This is really very much a question of how do we learn and improve. And what I’m excited about is that I think it will turn us all into learning organizations, which is a great thing. It means we’re going to continue to grow and figure out new things and new paths as we get more data in. But that’s a transition, because most of us aren’t there. Most of us are in a fixed mindset rather than a growth mindset, if you go by that terminology; I typically go by Peter Senge’s learning-organization terminology, which I like. That’s where I think we’re going, and you have that opportunity. But you have to learn how to do that.
Brian: Wow. Well thank you-
Jana: I know, that’s like a whole other podcast. Sorry.
Brian: I’m re-reading, I forget, was it Lean UX? No, I can’t remember the title. It’s a well-known book. I feel silly now. But the point being: are we learning when we roll out features or put something into our product or service, and are we measuring whether or not we’re learning something, as opposed to just, did the metrics go up and to the right?
Jana: Sorry, I know they cover that in Lean Analytics. And they probably do in Lean UX too.
Brian: Yeah, I forget the book. Anyhow, it doesn’t matter. I think we’re on the same page about whoever wrote the book about these things. The wisdom is sound regardless.
Well, Jana, thank you for coming on the show. This has been great. This has been Jana Eggers, CEO of Nara Logics. I’ll put a link to your site in the show notes. But I know you’re on Twitter. Is there anywhere else? What’s your Twitter handle? I’ll put that in the show notes.
Jana: I’m @jeggers, J-E-G-G-E-R-S.
Brian: Awesome. And any other place people might want to learn more about what you’re doing or what your company is doing?
Jana: No, those are the main ones.
Brian: Okay. Cool. Well I will drop those in the show notes so people can learn more about you guys and what you’re doing. Thanks for coming on Experiencing Data. This has been a great conversation.
Jana: Brian, it was so much fun. Great questions. I look forward and I hope we get some feedback.
Brian: Cool. Me too. All right, well cheers.