037 - A VC Perspective on AI and Building New Businesses Using Machine Intelligence featuring Rob May of PJC

Experiencing Data with Brian T. O'Neill

April 21, 2020 | 00:48:57

Show Notes

Rob May is a general partner at PJC, a leading venture capital firm. He was previously CEO of Talla, a platform for AI and automation, and before that, co-founder and CEO of Backupify. Rob is also an angel investor in numerous companies and the author of Inside AI, said to be one of the most widely read AI newsletters on the planet.

In this episode, Rob and I discuss AI from a VC perspective. We look into the current state of AI, service as a software, and what Rob looks for in his startup investments and portfolio companies. We also investigate why so many companies are struggling to push their AI projects forward to completion, and how this can be improved. Finally, we outline some important things that founders can do to make products based on machine intelligence (machine learning) attractive to investors.


Resources and Links:

Email [email protected]

PJC

Talla

SmartBid

The PAC Framework for Deploying AI

Twitter: @robmay 

Sign up for Rob’s Newsletter

Quotes from Today’s Episode

“[Service as a software] is a logical extension of software eating the world. Software eats industry after industry, and now it’s eating industries using machine learning that are primarily human labor focused.” — Rob

“It doesn’t have to be all digital. You could also think about it in terms of restaurant automation, and some of those things where if you keep the interface the same to the customer—the service you’re providing—you strip it out, and everything behind that, if it’s digital it’s an algorithm and if it’s physical, then you use a robot.” — Rob, on service as a software.

“[When designing for] AI you really want to find some way to convey to the user that the tool is getting smarter and learning.” — Rob

“There’s a gap right now between the business use cases of AI and the places it’s getting adopted in organizations.” — Rob

“If you are changing things and your business is changing, which is most businesses these days, then it’s going to help to have models around that can learn and grow and adapt. I think as we get better with different data types—not just text and images, but more and more types of data types—I think every business is going to deploy AI at some stage.” — Rob

“The general sense I get is that overall, putting these models and AI solutions [together] is pretty difficult still.” — Brian

“They’re not looking at what’s the actual best use of AI for their business, [and thinking] ‘Where could you really apply it to have the most economic impact?’ There aren’t a lot of people that have thought about it that way.” — Rob, on how AI is being misapplied in the enterprise.

“You have to focus on the outcome, not just the output.” — Brian

“We need more heuristics for how, as a product manager, you think of AI and building it into products.” — Rob

“When the internet came about, it impacted almost every business in some way, shape, or form. […] The reason that AI’s so interesting is because what you effectively have now is software models that don’t just execute a task, but they can learn from that execution process and change how they execute.” — Rob

“Some biases and stereotypes are true, and so what happens if the AI uncovers one that we’re really uncomfortable with?” — Rob

Transcript

Brian: Welcome back, everyone. This is Brian O’Neill, and this is Experiencing Data. Today, I’ve got Rob May on the phone. You’re still there, right?

Rob: Yes, thanks for having me.

Brian: Yeah, I can’t wait to chat with you about the VC side of AI products. Just for folks that don’t know, I learned about Rob through a great email newsletter that he puts out. I think it’s about weekly. If you’re interested in news about what’s going on with artificial intelligence technology, it’s a really great quick primer on highlights in the field, so definitely check that out.
Correct me if I’m wrong, but you’re a somewhat new general partner at PJC, which is a venture company in Boston, correct?

Rob: Yes, that’s correct. I’ve been here as a general partner since November, but I’ve been investing for about five years. I made 74 angel investments over the previous five years before joining PJC.

Brian: Right, right. You’re specializing in AI as kind of a focal area, is that correct?

Rob: I am. I focus here on what we call machine intelligence. Related fields would be robotics and neurotechnology.

Brian: Got it. Obviously, there’s a lot of hype in the AI space, but I think what might be really interesting to some of our listeners is that there are digital-native companies and startups that are pitching people like you for funding to see their vision come true. And then you have people working at legacy, non-digital native companies that are also trying to turn AI into value internally. And so I thought it might be interesting for people more in that category to hear what kinds of stuff are coming to you and how you’re perceiving that from a business standpoint. Then also how user experience ties into making sure that these types of artificial intelligence products and stuff actually still come back to being a viable product in the market. Whether it’s AI or not, there has to be some type of value being produced. And so that’s kind of what I wanted to talk about today. But first, what is the craziest AI pitch you’ve gotten recently since you came on board?

Rob: Oh, man, there’s a lot of them. I think maybe the craziest, and this is one that’s crazy in a good way that we actually thought about doing, was a company that’s looking to build a sort of trust layer over the internet using AI that’s going to use machine learning models to determine what’s a deep fake and what’s fake news, and sort of all this other kind of stuff. It was very aggressive and very technically challenging. I’m not even sure you can actually do it, but a very ambitious project.

Brian: Wow, so it would follow me and improve my experience and the safety of my data as I cruise the web?

Rob: I think the way they were thinking about it more is that it’s something that content providers would use and plug into via API so that they could see what other content was related to their stuff, are people misposting their stuff, things like that. Think about it almost like a fact-checking, machine-learning driven kind of API for content.

Brian: Got it. Cool. One of the things you have been talking about in your newsletter, which I thought was really interesting, is service as a software. Just to be clear, I’m not talking about SaaS, or software as a service, but it’s kind of reversed. Can you give us a quick couple sentences on what that is and why you’re seeing this as a repeatable model?

Rob: I made an angel investment a couple of years ago in a company called Botkeeper. Botkeeper took accounting and made it scalable. They did that by taking a bunch of accountants, and if you have an all-digital infrastructure—if you have NetSuite and QuickBooks Online, Expensify.com, or Bill.com—then they try to replicate the experience of having a remote bookkeeper. So you can imagine a human doing this. You email them receipts, they ask you how to classify transactions, they do financial statements, and they ask you questions. If you have that kind of experience, then what Botkeeper does is basically say, “Look, we are more accurate than a normal bookkeeper, and we’re half the price.” The way they do that is by constantly automating more and more of the bookkeeping tasks, by watching what the accountants that work for them do. They do employ some bookkeepers. And then they build machine learning models.

When you think about automation in a company, one of the reasons that we haven’t automated more work is because there are lots of things that we could automate, but we don’t have a data set to automate them with. Botkeeper, by collecting this data set and doing the automation, can now scale a services business like accounting, which used to rely just on humans, in the ways that you could previously scale a software business. What attracted VCs so much to software businesses was their scalability and their gross margins. Then software as a service came about because it was a new sort of deployment and operational model where it all ran in the cloud, and you just needed a browser to set it up. That was easier, and so we sort of flipped that on its head and said, “Hey, this is service as a software.” It’s a logical extension of software eating the world. Software eats industry after industry, and now it’s eating industries using machine learning that are primarily human labor focused.

It’s one of my favorite models to invest in, and it doesn’t have to be all digital. You could also think about it in terms of restaurant automation, and some of those things where if you keep the interface the same to the customer—the service you’re providing—you strip it out, and everything behind that, if it’s digital it’s an algorithm and if it’s physical, then you use a robot. I give the example in the post I wrote of a haircut robot which, I should be clear, I just use as an example because it’s actually not a great fit for where you’d want to automate. I don’t think you get all the benefits, but you can imagine if much of your haircut process was the same, and then you had this robot do a big part of it. Or you could think about a kitchen. You’ve seen some of these restaurants where maybe you still have a waiter or waitress, but all the food is prepared by a machine, and that improves the gross margins and makes it more scalable. Yeah, a very interesting business model.

Brian: Got it. It seems to me that this could be a tricky strategy to get right because—and correct me if I’m wrong here—it would take a lot of knowledge about the domain of accounting, as opposed to just looking at the data, because there was obviously some training that went on in order to learn how accountants make decisions. But I’m curious, because we all know accountants can get very creative with how they record numbers and your tax liability and all of that, and I’m curious how you capture all of that. I guess what I’m saying is, I could see someone just looking at a data set that reflected “here were the decisions that were recorded by accountants,” but it would lack the thinking that went into those. So at best, you’d have to infer why they arrived at those decisions. You’d definitely get some efficiencies for the really repeated, manual stuff, but it feels like it would miss some things unless you really understand accounting. I don’t know.

Rob: Yeah, and so the question is, how high can you go up the accounting work stack? Think about the bottom of the work stack, like classifying a transaction. For example, I got a receipt. Where does the receipt go, and what do I classify it as? Well, if you’ve had to deal with finance in any kind of role, you know that this happens a lot in big companies and small companies. You submitted a receipt from a restaurant for lunch. Was that a marketing expense or a sales expense? Were you with a potential customer? Was that an expense where you took out an employee for lunch and talked about something? Were you meeting with an investor? There are lots of different ways to classify that single expense. The same thing with, let’s say, a bill from Amazon. You’re a bookkeeper, and you’re like, “Okay, well, this is office supplies. This is office supplies.” Now you get a bill from Amazon.com where somebody’s using AWS, and they’re using servers for the first time. Well, your natural inclination as a bookkeeper is like, “Oh, that’s office supplies, because when I see Amazon bills, it’s office supplies.” And you would be incorrect in how you categorize that, but that kind of thing happens all the time. It turns out human bookkeepers are actually about 89% accurate. So the more experiences you have, the more accurate you’ll be, the more things you can see. But as a human bookkeeper, it’s hard to have a lot of experiences. As a machine, you can have hundreds of thousands of companies that you do the books for, and so you can build a giant machine learning model that knows how to categorize all this stuff because it’s seen variants of it.

So to your first point, what’s hard about this business model is two things. Number one, finding use cases where you do have enough data to create a model, and where it’s trainable, meaning it’s not super complex, high-level, cognitive thinking. And the second thing is figuring out how to adjust your workflows to collect that data set, because the service as a software business model is not one like we’ve seen previously where you have data sets lying around. Think about a machine learning model to detect breast cancer. Well, you have all these mammogram images lying around, so you take them and train a model on them. These tend to be business models where people are performing a task, and you think, “Well, I don’t have a data set of what these people are doing, but I could get one by watching them do it for a while.” And so by recording what they do, you can start to build the data set, and then you can use that to train your model.

If you take Botkeeper as an example, what they do is constantly train on the next task and the next task. So the service as a software business model starts out looking mostly like a services model with services margins. But then you automate 3% of the tasks, and then another 4%, and then 7%. You sort of eat into the amount of labor that humans have to do and improve the gross margins. But you might keep the interface to the user, whatever that is, to the customer. You might keep that the same.
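
To make the margin progression Rob describes concrete, here is a minimal sketch in Python. The price and cost figures are illustrative assumptions, not numbers from the episode; the point is just that gross margin climbs as models absorb a growing fraction of the human labor behind a fixed-price service.

```python
# Minimal sketch (illustrative numbers, not from the episode): how gross
# margin improves as a service-as-a-software business automates more of
# the human labor behind a fixed-price service.

PRICE_PER_CUSTOMER = 100.0   # monthly revenue per customer (hypothetical)
LABOR_COST = 80.0            # human labor cost per customer with 0% automation
OTHER_COGS = 5.0             # hosting etc. per customer (hypothetical)

def gross_margin(automated_fraction: float) -> float:
    """Gross margin given the fraction of labor tasks now done by models."""
    labor = LABOR_COST * (1 - automated_fraction)
    cogs = labor + OTHER_COGS
    return (PRICE_PER_CUSTOMER - cogs) / PRICE_PER_CUSTOMER

for frac in (0.0, 0.03, 0.07, 0.25, 0.60):
    print(f"{frac:>4.0%} automated -> {gross_margin(frac):.0%} gross margin")
```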

Brian: Got it. And do you see that almost becoming a product for accountants, where it’s eliminating non-strategic thinking work and automating that work stack you talked about? Different tasks and activities happen on a daily basis, but there are probably ten of those receipt-tracking activities for every one time you start thinking, “Okay, how do I write down this expense?” Things that you really want your accountant working on for the bottom line. Is that how you see it? Is it more of an AI assisting the accountants, or is it really a complete removal of the human accountant from the equation?

Rob: I think it’s both. I think a lot of the grunt work of reporting these kinds of business details goes away. Let me give you another example, which is customer support automation. My last company, Talla, was in this space, and the idea is that when a ticket comes in—let’s say you have 60 people on your support team—there are a fair number of tickets that come in that are new and novel problems. Maybe you launch a new product, and here’s a novel support issue. But a lot of them are the same issues that come up time and time again. Maybe you, as a support rep, haven’t seen an issue yet, or maybe you saw it a long time ago and you don’t remember how to find it, but you should be able to teach a machine that if I solve an issue for one customer one time, the machine should be able to repeat the solution, or some variant of the solution, for customers going forward. At Talla, some of our customers would save $1 million, $2 million, $3 million a year in support costs because they could suddenly do 20% more tickets in a month.

Brian: Got it. You also said in one of your recent newsletters that “AI is everywhere. Everybody will need it.” That’s a pretty big group of companies and people, especially if we think outside of [the fact that] we’re both here in the Boston area, where there’s lots of tech around us. Not having grown up here, Boston, to me, is the exception as opposed to the rule. I’m curious, how much do you believe that AI is everywhere and everyone will need it? Why does every company need this particular tool?

Rob: Well, I assume by a particular tool you mean AI, because it’s sort of a ubiquitous technology, like the internet. When the internet came about, it impacted almost every business in some way, shape, or form. Even if all it did was make it easy for you to reach your customers or whatever, that’s still pretty different. But the reason that AI’s so interesting is because what you effectively have now is software models that don’t just execute a task, but can learn from that execution process and change how they execute. If you’re in a static business where you’re like, “Yeah, we bought a software package 15 years ago, and we haven’t had to change a thing about it. No new configuration, no new updates, nothing’s changed,” then you’re not going to need it. But if you are changing things and your business is changing, which is most businesses these days, then it’s going to help to have models around that can learn and grow and adapt. I think as we get better with different data types—not just text and images, but more and more types of data—I think every business is going to deploy AI at some stage of their business.

Brian: We’re recording this in January 2020, but the general sense I get is that overall, putting these models and AI solutions [together] is pretty difficult still. The surveys on this show that the success rate for deploying models into production, at least inside companies, is quite low. And I’m curious. You worked in backup, right? So it’s like infrastructure. It’s supposed to just be like oxygen. You don’t think about it. It’s just there. What’s that gap like, and what’s the journey going to look like to get to that point where it’s just like oxygen?

Rob: Well, I think you’re going to have a good 10 to 15 year period, much like you did for the early internet. And here’s why: because people misapply AI. If I look at where companies are applying AI right now, it tends to be wherever you have the most progressive executive in the company. You have somebody who’s like, “Oh, we should be forward thinking. We should do AI, and how can we apply it to this?” They’re not looking at what’s the actual best use of AI for their business, [and thinking] “Where could you really apply it to have the most economic impact?” There aren’t a lot of people that have thought about it that way.

I wrote a blog post a couple years ago based on a talk that I gave at the Launch Scale conference, and it’s called the PAC Framework for Deploying AI. That stands for predict, automate, and classify, which are three things AI can do really well. And I encourage people in that post to look across your customer set, your business operations, and your core products, and make a grid with those three things on one axis. On the other axis, you’re going to put predict, automate, classify. Then fill in each box. Where could you predict things for your customers? Where could you automate things for customers? Where could you classify things about your core product that might be beneficial? You won’t have a thing in every box, but that’s how you can think through what the applications of AI to your business are.

Then, when you have those things in your boxes, you start to think about, “Well, which ones will have the most strategic impact, and do I have the data set to be able to do them?” Some of the ideas you’ll have are like, “Wow, it would be really awesome if we could automate this piece of our business operations.” That will be an idea in one of your boxes, but you won’t be able to do it because you won’t have a data set to train a machine learning model on. So you’ll have to either work to get that data set, or just wait until somebody else figures that problem out. It’s one of those big projects where we’re going to have to record a lot more data about the world so that we can do more AI on the world, in every business and everything. That’s going to take a decade or more.
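
A minimal sketch of the PAC grid in Python, for readers who want to run the exercise. The three business areas come from Rob’s description; the example ideas in the cells are hypothetical, not from his post.

```python
# Minimal sketch of the PAC brainstorming grid: business areas on one
# axis, predict / automate / classify on the other, with each cell
# holding candidate ideas. The example ideas below are hypothetical.

AREAS = ["customers", "business operations", "core products"]
PAC = ["predict", "automate", "classify"]

# Fill in candidate ideas; empty cells are fine.
grid = {(area, verb): [] for area in AREAS for verb in PAC}
grid[("customers", "predict")].append("forecast churn risk")
grid[("business operations", "automate")].append("auto-route support tickets")
grid[("core products", "classify")].append("flag defective units from photos")

# For each idea, ask the two follow-up questions from the framework.
for (area, verb), ideas in grid.items():
    for idea in ideas:
        print(f"{area} x {verb}: {idea}")
        print("  - How big is the economic impact?")
        print("  - Do we have (or can we collect) the training data?")
```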

Brian: Yeah, I like the framing of that framework, and I wonder, tell me what you think, if part of the challenge is also understanding that this isn’t binary. It’s not “do we have the data or not to build a model for this;” it’s “we have some of the data,” or “this data is partially complete.” Then you get into the question of whether it’s worth building a model that can only predict a little bit of stuff. Then you’ve got this issue of, does an employee trust it when it says, “Set the dial to this for the factory floor,” whatever the heck it is? It becomes this very gray area of, is there enough data and insight here to actually matter? Because you can get the technical modeling part done right, but I feel like sometimes there’s still this disconnect, which is, how do we tie this to business value, and is this small project enough? Or do we need to know everything about our customers before we can actually do this? Do you see this as binary, or…

Rob: Yeah, no, it’s definitely a continuum, right? There are sort of two points that I’ll make on that. One is that one of the problems people have with deploying AI is that these models are probabilistic. Software has been binary in terms of its outputs. It’s been this or that, and now you have these models where, if I’m going to train a model to say whether a picture is of a cat or a dog, that model might be right 88% of the time. Is that 88% good enough? Well, if you’re having humans review it, it probably is, because it makes things much faster than humans having to look at every picture and categorize it themselves. Now the humans can just click, “Yes, yes, yes. It’s a dog. Dog, dog, dog. Oh, there’s one I have to fix.”

Let me give you a concrete example. I’m an investor in a company called SmartBid, and what SmartBid does is look at imagery from construction sites. There are many things that they can determine from these images. People take pictures of things like, let’s say, you’re going to put some rebar inside and then pour concrete over it. Well, how do you know how much rebar was in there and whether you did it according to spec? You take a picture of it before you pour the concrete. And so there are a bunch of construction site images, and people may use those to validate the construction plan, look for equipment that may be sitting unused, or look for safety violations. They have humans who sit there and look at all these construction images to figure out whether there is anything important in each image. If you can create a model to do that, the model doesn’t have to be 100% accurate. If the model can just narrow it down to, “Don’t look at these 10,000 images. Look at these 400,” it saves you a lot of time. And yeah, some of the 400 might not be right, but it’s good enough. So one problem is this probabilistic output: the model isn’t 100% accurate, it uses human [inaudible], and we haven’t learned to work with that as well as we should have.

The second thing, to your point on data, is that, fascinatingly, an interesting new industry around synthetic data is springing up to fix this problem. I’ll give you an example of a synthetic data use case. Let’s say you have the feature where you unlock your phone with your face, and I’m the company that makes that software piece. I’m getting bug reports that my model doesn’t work very well. It doesn’t unlock your phone when you have glasses on. It doesn’t unlock your phone when you’re in a bar and the lighting is low. Well, what are my options? I can go improve my model by getting another 10,000 pictures of people with glasses, or 10,000 pictures of people in dark lighting in a bar. Or, if I have pictures of you, I can probably Photoshop glasses on you and then stick that in the model. I can probably change the lighting on your face in Photoshop, stick that in the model, and train the model on that. So you can use synthetic data in some cases to build out and improve your model. Now, it won’t work for everything. If you’re trying to generate natural language sentences using synthetic data, that doesn’t work very well, but there are some use cases where synthetic data can work very well: given one data point, you know how to create a bunch of data points around it that are good for your model.

So my belief is that every technology stack is going to have a workflow built into it in the future where, when you get a report that your model’s not performing as expected, you’ll be able to easily generate synthetic data to account for that discrepancy and improve the performance of your model.
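
As a minimal sketch of the face-unlock example, here is what generating the low-light variants might look like in Python with Pillow. The file names are hypothetical placeholders, and a real pipeline would also cover the glasses case, which needs an overlay asset.

```python
# Minimal sketch of the synthetic-data idea Rob describes: take one real,
# labeled face photo and generate darker "low-light bar" variants to add
# back into the training set. File names are hypothetical placeholders.
from PIL import Image, ImageEnhance

def make_low_light_variants(path, factors=(0.7, 0.5, 0.3)):
    """Save progressively darker copies of the image at `path`."""
    img = Image.open(path)
    out_paths = []
    for f in factors:
        dark = ImageEnhance.Brightness(img).enhance(f)  # f < 1.0 darkens
        out = path.replace(".jpg", f"_dark{int(f * 100)}.jpg")
        dark.save(out)
        out_paths.append(out)
    return out_paths

# The darker copies inherit the original label ("this is user X's face"),
# so they can be appended to the training set directly.
variants = make_low_light_variants("user_face.jpg")
```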

Brian: So there’s the accuracy of the model, right? But one of the things I feel like I hear quite frequently, to use your analogy of the cement and the rebar, is starting with photos of cement being poured over wall frames with rebar, and the technical team saying, “Does anyone care about pictures of cement? I’m not sure, but we can make a prediction about whether it was poured right.” Then there’s a business person on the other end scratching their head, thinking, “Okay, what would I use this for?” So you have models being built that way, where it doesn’t matter that it’s 92% accurate, because someone still doesn’t know why they need it. Is cement the right thing to care about first?

Rob: Yeah. No, that’s a great point, which is that there’s a gap right now between the business use cases of AI and the places it’s getting adopted in organizations. It’s getting adopted where somebody thinks it’s cool, or where there’s a thing they know they can do. To your point about concrete, somebody will say, “We have a lot of data on this thing. Let’s just make a machine learning model on that.” You’re like, “Well, is that model useful?” What they should really be doing is classifying all the things: “Hey, if we can predict better here, if we can automate these things, that would be really useful,” and then going through and figuring out whether they have the data for any of them. If you don’t, is it easy data to get? Could you run a process for three or four or five months to collect enough data to build a good model? This lack of buy-in is part of the problem, which is that so many AI projects turn out to be stupid and useless. And then, if you’re old school and you don’t believe in AI, or you don’t understand it, once you see a project that’s stupid, you go, “Well, we tried AI, and it was useless, so we’re not going to do it.”

Brian: Right. Yeah, you can’t throw the baby out with the bathwater, but I understand that. I’m curious, do you have a fair amount of pretty lousy stuff coming in the door to take a look at, or would you say a lot of them are fairly robust in terms of a business and technology fit?

Rob: Well, the biggest problem I think you have is that you just have a lot of companies that are features of something bigger. Somebody will say, “Oh, there’s some software tool. We’re going to build an AI version of that.” If you take the classic startup theory of “do one thing and do it well,” most of the things that you’re going to do are not going to be a full company. Let’s say you’re going to do AI CRM. Well, where in the CRM are you going to apply AI? If you do it everywhere, that’s a huge undertaking. You’re going to have to build multiple models. You’re going to have to build a model for each task that the software can do. So it’s like, “Okay, we’re going to start with a very stripped-down thing. We’re going to do this one thing that CRM doesn’t do well, and we’re going to do that with machine learning.” Well, then your likely exit is to just get acquired into a CRM company, and that’s not that attractive from a venture perspective. I think it’s really hard to find those opportunities that are good places to start with AI. I see a lot of opportunities where people are building things that the big companies are going to build into their products eventually, and that’s not where you want to start with AI.

Brian: Got it. Yeah, I think the question of “so that” at the end of sentences is really useful here. When someone says, “We’re going to do an AI version of CRM,” I would always say, “So that…what?” You have to focus on the outcome, not just the output. The thing you want to make is supposed to enable some outcome. We talk about that a lot on this show. It doesn’t matter if you built the CRM thingy, because no one knows what it’s for.

Rob: Right. Just off the top of my head, there are two big problems that I can see in CRM. One is that the predictive value in your pipeline is very low. Are you going to use AI to predict deals that are better and more likely to close? Actually, part of the reason it’s low is because of your main problem with CRM, which is data entry. People don’t enter data. It’s hard to do. It’s boring. Salespeople don’t want to do it. They want to sell. But there’s a perfect example. If you’re going to say, “Well, I’m going to build an AI-powered CRM that makes data entry much easier,” because it uses a bunch of AI to maybe autoformat the data for you, or guess what the data is after the phone call, so you don’t have to enter it all and you just correct where the CRM goes wrong, I think that’s tough as a standalone product, because you still have to build all the other stuff that the CRM does. You have to be an add-on to CRM. What you really have to do is pick a market segment where you could probably go with a custom CRM, where data entry is just hugely painful, even more so than in most sales organizations, and maybe start there. Maybe that would give you a roadmap to build a more generalized CRM.

Brian: Yeah. It’s interesting you say it that way because, for companies and data strategists and people working as employees internally at a company, that sounds like very much the right way to approach a small, incremental piece of AI value that’s not trying to boil the ocean. It’s simply, “Fix that one piece of the CRM. Make the data entry process a little bit easier for your team so they can spend more time selling.” I don’t know. What do you think? It sounds like a good strategy if that’s not your whole business.

Rob: Yeah, definitely. A lot of the reason that AI is going to be won by big companies in lots of markets is because you need the data, and you need the users to train the models, and you need the usage. It’s hard to bootstrap that. And so you’re going to see a lot of use cases where you just can’t compete against the big guys because, by the time you get any kind of traction, they’re going to realize it’s valuable, and they’re just going to turn on their data sets, and they’re going to do so much better and crush you, and they’re going to have the distribution and everything else. You have to really, really be selective in your use cases if you’re going to start an AI company.

Brian: Have you seen examples in your pitches or some of your portfolio investments where getting the design piece right or the user experience piece right was important for conveying the value to the customer? I’m not just talking about marketing, but rather really aligning the experience with the work or the tasks of the person that’s actually sitting down to use the system. Have you had any examples of that?

Rob: Yeah, so one of the things that we did at Talla when I was still there was we put in a metric that showed the efficiencies gained from the AI. We would track the number of tickets that somebody had coming in, show how many tickets were automatically answered by Talla, and show what those would have cost if you had a human do it. I think you’re starting to see a lot of workflows that have suggestions in them, or pre-population of data. Grammarly is probably a good example, where you understand the value because the AI corrects you and you can correct the AI back and forth. The model can learn, so I think those are really, really interesting.

Then, the other interesting thing about AI from a design perspective is that it’s influencing the entire technology stack, from hardware to infrastructure to middleware and application layer stuff. Everybody at every layer is having to rethink that user experience. I’ve done three AI chip investments, and the reason I’ve done those is because AI workloads don’t work well on CPUs. They need different kinds of processing, so that’s opened up this market for new kinds of chips. But the user experience around how to use those chips is very different from what you’re used to today. We’ve been on these same chip architectures for 40, 50 years. Those chip people have to think about… Their user interface is, how do you program that chip? How do you compile stuff into it? How do you integrate it into other electronics? I see these huge design problems all up and down the stack. It’s really, really a great time, I think, to work in the space. There’s a lot of really cool problems to solve.
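
Here is the arithmetic behind that kind of efficiency metric, as a minimal sketch in Python. The function and the numbers are hypothetical, not Talla’s actual implementation.

```python
# Minimal sketch (hypothetical numbers and field names) of the kind of
# efficiency metric Rob describes surfacing in a product UI: tickets the
# AI answered on its own, expressed as avoided human-handling cost.

def automation_savings(tickets_total: int,
                       tickets_auto_resolved: int,
                       cost_per_human_ticket: float) -> dict:
    rate = tickets_auto_resolved / tickets_total
    return {
        "auto_resolution_rate": f"{rate:.1%}",
        "estimated_savings": tickets_auto_resolved * cost_per_human_ticket,
    }

# e.g. 12,000 monthly tickets, 2,400 deflected, $14 average handling cost
print(automation_savings(12_000, 2_400, 14.0))
# -> {'auto_resolution_rate': '20.0%', 'estimated_savings': 33600.0}
```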

Brian: Yeah, I think it’s interesting that you bring that up, because I think sometimes people focus on the graphical user interface as their only perception of design. At least data people sometimes do, thinking it’s data vis or whatever. And it’s like, “No, no, no. It can actually be designing the API to interact with the chip.” The result might be some documentation of an API, but you’ve actually gone out and figured out what the developers want to interact with, and how that fits their mental model of how it should work, particularly if they’re coming from an old model of how it should be and you’re introducing a new one. There’s that gap between current knowledge and future knowledge, and you need to close it. To me, these are all facets of human-centered design and putting it into play, whether or not there’s an explicit GUI or something sitting on top.

Rob: Yeah. Well, one of the interesting things about designing for AI is that you really want to find some way to convey to the user that the tool is getting smarter and learning, because that’s very important. One of the challenges that we talk about is whether or not it’s sometimes worth it to even engage in a little bit of what we call AI theater. Anytime you design a product, you sometimes have these features that people think they want, and you, having built the product, designed the product, and seen thousands of people use it, you’re like, “I know you think you want that feature. I’m just going to tell you, you’re not going to use it. Nobody does. It sounds cool, whatever.” In order to make a sale, sometimes you have to build those features, and they’re more marketing features than anything else. And so, is there an equivalent in the AI space? It’s sort of AI theater, which is like, “Okay, I’m going to make this really fancy, wow feature that’s not that useful but impresses people in a demo, just to make it seem like the tool is smart.”

Brian: Got it. Yeah. That’s interesting. There’s a great slide going around. You probably saw it on LinkedIn, and it was something like, “If you saw it in a slide deck, it’s probably AI. If you saw it in developer documentation, it was probably Python.” It’s talking about how everyone’s trying to plop AI into their marketing language, but ultimately, the time will come when it’s like, “That’s nice. What do I get for paying you 10 times as much for your AI-based solution?” It comes back to value, regardless of the hammer that you’re going to hit the solution with.
In this whole space around AI, where everyone’s trying to move quickly and show value soon, what’s the challenge in terms of making sure the work that we’re doing is ethical? There’s all kinds of stuff going on, obviously, with people’s private data and real concerns in this space. The U.S. government put out, I think, its first stack of rough guidelines there, some of which have already taken a few punches. Can you talk to me about balancing the innovation side with the privacy issues and the ethical issues, so maybe we don’t have a repeat of what’s gone on with social media, for example, for the last 10 years? We don’t really want to run into whatever the 10-year-out version of AI looks like on the negative side.

Rob: Yeah, you’ve got a bunch of interesting problems. One is that, if you take the AI bias problem in and of itself, that’s a tough one for companies to address. Let me give you two competing examples. There have been studies that show, for example, when you’re looking at which criminals get paroled, that right after lunch, when the parole board is not hungry and maybe is thinking more clearly and their blood sugar’s up or whatever, they’re more likely to be in a good mood and give people parole than right before lunch. I think that’s a study somewhere. I don’t exactly remember. Let’s say you train a machine learning model on that. The machine learning model now picks up on this weird bias in there, because you can always find a correlation. Now you’ve perpetuated this bias all through the system. That’s the kind of thing where you’re like, “Well, okay. We need to fix this, because you have certain demographics that are really suffering, and it’s not their fault.” Or, who trains the data? If you have a bunch of men versus women train your data sets, then, yeah, you might get different outputs, and you might perpetuate the biases that one sex or the other has.

But there are other issues where you possibly have bias, but should you try to fix it? An example: there was a big stink made about a natural language processing model because it was much more likely to associate the word nurse with woman than with man. Well, if you look at the labor force statistics, 91% of nurses are women, and 9% are men. So if you want your model to be accurate, you face this question: “Okay, do we want to train the model that 91% of the time it should equate a nurse with a woman, because that’s what the labor market looks like, or do we need to make it 50/50 or gender-neutral somehow, because maybe that helps more men become nurses?” I don’t know. It’s bias issues like that that are really complicated. Some biases and stereotypes are true, and so what happens if the AI uncovers one that we’re really uncomfortable with? We haven’t learned as a society that just because a stereotype is true of a population, it doesn’t mean it holds for any given individual. You can still treat people differently as individuals. So I think it’s going to take a lot of education to work through these issues with people.

And that’s just one bias issue. You have ethical issues about control. You have ethical issues about autonomy and who’s liable. You have safety issues as AI gets more powerful. AI’s going to run more and more work processes. Are you going to let it run chemical processes, and what if it messes up and blows up a plant? Are you going to let it run nuclear centers? There are just tons and tons of issues coming that we’re not prepared to deal with as a society.
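
The nurse association Rob mentions is easy to reproduce against a pretrained word embedding. Here is a minimal sketch in Python using gensim’s downloadable GloVe vectors; the episode doesn’t name a specific model, so this choice is an assumption.

```python
# Minimal sketch of probing a pretrained embedding for the nurse/gender
# association Rob describes. The GloVe model here is my choice of example;
# the episode does not name a specific NLP model.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads on first use

# Cosine similarity in the embedding space; "nurse" typically sits
# measurably closer to "woman" than to "man", mirroring the corpus.
print(vectors.similarity("nurse", "woman"))
print(vectors.similarity("nurse", "man"))
```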

Brian: Yeah, I agree with that. I think part of it is also the conscious business decision to use, for example, your nurse data set. You get into, as you called it, the continuum: the gray area where it’s a business decision whether or not you’re going to use that data set when you know it’s going to call most women nurses and most men not. To me, that’s where part of the issue is, and I think diversity in teams becomes really important. Something I advocate with clients working in this space is getting outside perspectives on your work, because your own worldview is just your norm, and it’s really hard to step outside of that unless you’re a researcher and you’re constantly doing that. You need those outside opinions, and you need to ask yourself, would you want to see yourself in the news if this decision went wrong? It’s what Cennydd Bowles calls the news test: how would you feel if your family saw this written up in the Times?

Rob: Yeah, and you have to be super thoughtful because sometimes, you could still build a racist machine learning model even if you take race out of it because names tend to cluster with racial identity to some extent. You might find that a model keys in on a Latin last name or an Asian last name or something, or an Islamic last name, and even though you don’t have race in there, it still clusters in a certain way. So you have to be really, really careful.

Brian: Yeah. Even when I think about full experiences, as you were talking about with the whole work stack, it comes down to the raters, too. If you’ve got humans rating your data for training sets, even a simple question like “Is this a car or not?” can go sideways. You see a picture of a Jeep Grand Cherokee, and it’s like, “Well, I guess it’s a car. It’s an SUV, but that’s not a choice, so I guess it’s a car.” The rater doesn’t mean anything wrong by that, but right there, because of the way the question was posed, you’re potentially training on a whole skewed set of data. So part of this is that you need to be interfacing with your raters as well, asking, “When was the last time you ran into a situation where you weren’t sure how to rate something? What was that friction? Tell me about those times,” so that you can adjust even at that stage. There are so many places here to get right and to get wrong.

Rob: Yeah, and that’s the perfect example that you just gave, where you need a business process so somebody can flag that. Then you can figure out, “Okay, do we want to label Jeeps as cars, or do we need another category for this model?” A lot of times, it’s like survey design. Sometimes you set up your survey, and it doesn’t tell you what you thought it would because it was poorly designed. A lot of training and classification works the same way. I think this happens a lot with machine learning models.

Brian: Yeah. I know we’ve only got a few minutes left here, but I wanted to ask you about product management. As a product designer, I’ve worked with a ton of tech companies and clients in this space, and the product management, engineering, and product design trio is really powerful. If it was a rock band, that’s who would be in it when you’re building out products. Even Gartner recently said their chief data officer version four, the kind of model for that role, was shifting from projects to products. The irony there, to me, is that the role of product management for data products is entirely missing in that space, and I feel like that’s part of the reason why large companies that aren’t tech companies aren’t able to do this well: there’s no one sitting at the helm whose job it is to make sure that the AI or the model or the data product actually delivers some business value. Do you feel like that’s a gap as well? Maybe you don’t see that as much because of the work that you’re doing, but I’m just curious if you have an opinion on that.

Rob: No, I do actually, because I think it’s odd, but formal product management has lagged a lot of trends. It was slow to adopt agile, sometimes, outside of tech companies and everything else. It’s only in the last few years that you’ve really been able to start to get more formal training on it. There aren’t college degree programs anywhere that I’m aware of. If there are, there aren’t very many. The training for it is really, really weak in a lot of ways. Of course, that’s going to apply over to AI. Someone at Zetta Ventures—I think it was Jocelyn—wrote a great post a couple years ago on what they called model-market fit, a play on the idea of product-market fit. You have to think about your AI model: does it do what you need it to do for your customer, and does it do it well enough to make a difference? If your model’s not that good, then what’s the point?

Brian: Absolutely. Yeah, I think this is a place where there’s a lot of room for improvement: ensuring that someone has some bottom-line accountability for that. What I keep hearing is that it’s like a tennis game. The data people think it’s the frontline business manager’s job to do that, and the business manager’s like, “You guys are the data science people. Tell us what’s possible.” They toss the ball back and forth, and who’s going to actually find a good problem to go out and solve? What I’m hearing more and more is that the expectation is growing on the data technology people, especially in leadership, to take on that role of thinking product. I think this Gartner CDO version four speaks to that: the importance of “You need to understand your business. You need to understand the human factors involved and who’s going to use it. How are they going to use it? What do they need it for?” And not just, “Tell us what you need built, and we’ll nail it.”

Rob: Yeah. We need more heuristics for how, as a product manager, you think of AI and building it into products. I can give you one simple one that you can use: if you find yourself in a situation where you have a lot of data about something, and you have a lot of humans who have to look at that data and do something with it, and you think, “It’s relatively scripted. I only need humans to do it because it would take so long to write out all the rules, but there are rules,” then that’s usually the kind of situation where it’s really good to put in a machine learning model. A machine can learn the rules from looking at the data. That’s the kind of thing I would tell a product manager: “Hey, when you’re looking at products and you find yourself in that situation, look to machine learning to solve some of it.” This is probably a blog post I should write somewhere along the line. It should be like, “Here are six heuristics that an AI product manager should have.”

Brian: Yeah, I think that would be useful, because at least there’s a hunt for good problems, and I think the fact that that’s even understood, that it’s not just a hunt for the right tech, matters. I think there is a genuine hunt for good use cases, and people need help with that. So I think that would be a great article. I would look forward to it. I know we’ve got to wrap up soon, but I was just curious: do you have any closing thoughts for people, data strategists, or people working in this AI and data science field, on ensuring that they deliver good experience and good value with AI today?

Rob: I would just say two things. Number one, don’t pay attention to the AI news because it’s mostly research-y, and it’s mostly hype. Most of the real, interesting applied stuff gets buried and doesn’t get enough news. Make sure you’re paying attention to the right stuff. Number two, this is an area where you really need a back and forth with the customer. You’re giving the customer some ways to do some new things, like changing some of their behaviors. So a big part of designing a product that includes an AI is that, for example, if you’re going from a binary output to a probabilistic output, does your user understand that, and are they okay with that? And will they work with that, or are you just going to make them more confused? You have to constantly, I think, make those tradeoffs. It’s pretty important.

Brian: Awesome. Well, it’s been great to talk to you. This has been Rob May. Tell us, Rob, where people can look you up, follow your work, all those good things.

Rob: Yeah, so if you want to email me, I’m just [email protected], the venture capital firm where I’m a partner. You can also follow me on Twitter @robmay. Then, if you want to sign up for my AI newsletter that Brian referenced in the beginning, it’s just inside.com/AI.

Brian: Awesome. Well, I will definitely put those in the show links, and thanks for coming on Experiencing Data. It’s been really great to talk to you.

Rob: Yeah, thanks for having me.

Brian: All right. Cheers.
