040 - Improving Potato Chips and Space Travel: NASA’s Steve Rader on Open Innovation

June 02, 2020 01:01:13
Experiencing Data with Brian T. O'Neill

Show Notes

Innovation will be critical for NASA if the organization hopes to remain competitive and successful in the coming years. Enter Steve Rader. Steve has spent the last 31 years at NASA, working in a variety of roles including flight control under the legendary Gene Kranz, software development, and communications architecture. A few years ago, Steve was named Deputy Director for the Center of Excellence for Collaborative Innovation. As Deputy Director, Steve is spearheading the use of open innovation, as well as diversity of thought. In doing so, Steve is helping the organization find more effective ways of approaching and solving problems.

In this fascinating discussion, Steve and Brian discuss design, divergent thinking, and open innovation.

Resources and Links

NASA Solve Twitter: https://twitter.com/NASAsolve

Quotes from Today’s Episode

“The big benefit you get from open innovation is that it brings diversity into the equation […] and forms this collaborative effort that is actually really, really effective.” – Steve

“When you start talking about innovation, the first thing that almost everyone does is what I call the innovation eye-roll. Because management always likes to bring up that we’re innovative or we need innovation. And it just sounds so hand-wavy, like you say. And in a lot of organizations, it gets lots of lip service, but almost no funding, almost no support. In most organizations, including NASA, you’re trying to get something out the door that pays the bills. Ours isn’t to pay the bills, but it’s to make Congress happy. And, when you’re doing that, that is a really hard, rough space for innovation.” – Steve

“We’ve run challenges where we’re trying to improve a solar flare algorithm, and we’ve got, like, a two-hour prediction that we’re trying to get to four hours, and the winner of that challenge ends up being a cell phone engineer who had an undergraduate degree from, like, 30 years prior that he never used in heliophysics, but he was able to take that extracting-signal-from-noise math that they use in cell phones, and apply it to heliophysics to get an eight-hour prediction capability.” – Steve

“If you look at how long companies stay around, the average in 1958 was 60 years; it is now less than 18. With the rate of technology change, the old model isn’t working anymore. You can’t actually get all the skills you need, all the diversity. That’s why innovation is so important now: it’s happening at such a rate that companies that didn’t use to have to innovate at this pace are now having to innovate in ways they never thought.” – Steve

“…Innovation is being driven by this big technology machine that’s happening out there, where people are putting automation to work. And there’s amazing new jobs being created by that, but it does take someone who can see what’s coming, and can see the value of augmenting their experts with diversity, with open innovation, with open techniques, with innovation techniques, period.” – Steve

“…You have to be able to fail and not be afraid to fail in order to find the real stuff. But I tell people, if you’re not willing to listen to ideas that won’t work, and you reject them out of hand and shut people down, you’re probably missing out on the path to innovation because oftentimes, the most innovative ideas only come after everyone’s thrown in 5 to 10 ideas that actually won’t work.” – Steve

 

Transcript

Brian: Welcome back to Experiencing Data. This is Brian O’Neill. Today I have Steve Rader on the line and I’m going to introduce him in just a second. But, as I’ve been doing in the last few episodes here, as a podcast host it always feels a little bit weird to jump into your specific domain, the topic that you cover on your show, without acknowledging what’s going on in the real world, outside. And we’re still in the middle of this pandemic and the COVID-19 stuff.

 

So, I just wanted to again take a moment, as I’ve been doing in the last couple episodes to thank all the essential workers out there for putting themselves on the line for us. And I do have them in mind when we record these shows, so I just feel like it’s necessary to mention that before we jump in. But without further ado, I’d like to bring Steve onto the call here, Steve, how’s it going?

 

Steve: It’s going great. Thanks for having me, Brian.

 

Brian: Yeah, so Steve’s from NASA, and I’m super excited to talk about innovation with you. And the first thing I’m going to ask you-I’m going to let you introduce yourself in a second-but, so you’re the Deputy Director for the Center of Excellence for Collaborative Innovation. So, it’s an interesting title, and so the first thing I wondered when I saw that was, it says, “Collaborative innovation.” And so, I’m wondering, does that mean that there’s a non-collaborative innovation and there’s, like, a meaningful distinction in that?

 

Steve: Yeah. So, it’s interesting, that title. The title of our center of excellence was, kind of, selected before we came on, and it was in the early days of open innovation. And it really is about how do we bring people from all disciplines together to make innovation happen. And specifically, we focus on open innovation, and the big benefit you get from open innovation is that it brings diversity into the equation for innovation, and forms this collaborative effort that is actually really, really effective. So, that’s where it got its name and, I think, in the early days, before there was crowdsourcing and open innovation, open talent and gig economy, there was still some fuzz around what exactly we were going to call this, and that was one of the early terms that was used back in around 2010, 2011.

 

Brian: Mm-hm. So, you have a background in engineering correct? Like, you’ve had a notable career in the space program. So, tell my audience a little bit about your journey from-I feel like when I read your profile and I’ve listened to you before-that you’re on chapter two or act two of your life, moving on from maybe the hands-on engineering and management to this new space of open innovation. So, is that a fair read or tell me about that.

 

Steve: Yeah. No, that’s a great way to describe it. I’ve been with NASA for 31 years this year, and it started with mechanical engineering, coming out of Rice. I started at NASA as a flight controller, which was its own kind of flavor of engineering and Space Operations. I did that for-

 

Brian: Did you have a pocket protector?

 

Steve: [laughs], I won’t say-

 

Brian: I always think of Gene from the movies. [laughs].

 

Steve: Right? I actually worked for Gene Kranz. That was one of my first bosses and I worked on those green consoles that you see in the movies.

 

Brian: Oh, nice.

 

Steve: Yeah. So, that kind of dates me a little bit. But yeah, it very much was an interesting, thrilling kind of job. We were just forming space station and trying to figure out how we were going to operate it. I was in the life support systems, which is a very mechanical engineering type of topic. I then became a software guy, and moved over to software development, and did flight software for about seven or eight years, worked on projects like X-38, and time delay. Did some of the versions of VideoCon and file transfer work up to shuttle and eventually station.

 

Then I moved over to communications architectures, and I was the main architect on the Constellation Program for how we would have interoperable communications-kind of net-centric, if you will-communications for future programs. And then, I actually got the crowdsourcing bug before I actually got the job. I read Jeff Howe’s book on crowdsourcing back in, I think, 2009, 2010. And it really turned me on to this idea of innovation and diversity, and really how these new platforms are enabling innovation at a level that just wasn’t conceivable before. And so, when I read that, I just realized this changes everything. And NASA needs these tools if we hope to stay competitive. And so, I just started diving in. I joined, like, five or six different communities. I started trying to understand why are people contributing? What do they have to contribute? Who’s on here?

 

And about the same time, two folks at NASA, Dr. Jeff Davis and Jason Crusan, were actually piloting with InnoCentive, yet2, and TopCoder on different projects to see, is this stuff for real? Does it really work? They had benchmarked with a whole bunch of companies like Procter and Gamble and others, and they were having great success with this. So, they piloted it. They had really great success with it, and so I started participating in some of the employee programs that were starting up. And then, in 2013, I happened to run into somebody that I knew well from a previous program that was involved in this, and we had a great conversation and she said, “You know, we actually are looking for a new deputy, and would you be interested?” And it didn’t take much for me to say yes there, because I really felt passionate about this. I felt this was something that the agency needed. And so, I just jumped in with both feet and have been working it ever since, trying to figure out how much is possible with this and trying new things, and we still have yet to figure out the full capacity of what’s possible here.

 

Brian: So, you’re in this innovation space, right? So, I think, for the audience listening to my show, I always try to imagine exactly who they are because you don’t always know exactly who it is. But, obviously there’s people in analytics, and data science, and technical product management. That’s who I think about when I program my guests and this type of thing. I sometimes wonder, though-and just from conversations I’ve had-I think some of these diverse teams, and innovation, and everyone likes to think they’re innovative.

 

And, at the same time, I think it can sound very hand-wavy to people with very analytical minds. “Hey, we had a bread baker join our AI team, and they came up with this incredible idea for improving my model accuracy by 15 percent.” Like, come on. And so, how does a leader convince a board, or an exec team, or a boss that this diversity of thought matters, and secondly, how do you measure innovation competency if you’re not hitting home-run projects, like, wow, we revolutionized whatever. How do you do that? I think that’s a challenge.

 

Steve: Yeah, there’s so much to this, oh, my gosh. So, get ready. Here we go. You just described the challenge of our job. So, NASA is populated with 60,000 people when you count all of the Boeings and the Lockheeds and the big contractors, and these are people that are brilliant, and they really are at the top of their game, and they’re innovative. And so, the message of, “Hey, you need to bring in diversity to be more innovative,” is really not received well by that crowd, for all the reasons you just cited.

 

And in fact, I even talk about the “innovation eye-roll.” When you start talking about innovation, the first thing that almost everyone does is roll their eyes. Because management always likes to bring up that we’re innovative or we need innovation. And it just sounds so hand-wavy, like you say. And in a lot of organizations, it gets lots of lip service, but almost no funding, almost no support. In most organizations, including NASA, you’re trying to get something out the door that pays the bills. Ours isn’t to pay the bills, but it’s to make Congress happy. And, when you’re doing that, that is a really hard, rough space for innovation.

 

Because if you’re trying to meet a deadline and meet a budget, you don’t actually have a lot of time for innovation. You don’t have time for someone to raise their hand in a meeting and suggest something that you will then have to spend time and money on. You’re not just trying to get enough of the problem solved to get something out the door. And so, a lot of those teams are trained to immediately find the flaw in a problem or in an idea, so that they don’t waste time on it. And that makes it really rough for innovators, because innovators require an open space where you can actually put out ideas, and talk about them, and build on ideas that might not work, to find those ideas that do, to bring in the people that are in different domains, to try to synthesize.

 

And it’s a harder process. And those two things, when they meet in the workplace, create conflict because they just aren’t compatible with each other. And so, what we tell people is, the people are all innovative, they all want to be innovative. But when you give them a priority of get this out the door, make your schedule, it’s really hard to still have that innovative context. And so, what we tell teams is hey, make a very conscious effort, every once in a while, to really switch gears and say, “Look, we’re going to have a half-day meeting or a two-hour meeting where we’re going to change the rules. And in this construct, we’re going to bring in diversity. We’re going to not say no. We’re not going to analyze why problems or solutions won’t work. We’re going to start trying to figure out, where are the solutions we need to be looking for.” Because sometimes there’s some low-hanging fruit where you really can solve problems rapidly, which then helps with problems along the way on projects that are trying to get things out the door, but you have to change the context.

 

And you have to change the conversation when you do that. And yeah, then you have to use some real interesting methodologies about analyzing the problem, and there’s some really great innovation methodologies, and you have to be open for new tools. And this is where open innovation comes in. Open innovation is simply a new tool that is really necessary now. What we’ve found is that, as innovators in a given domain, as problem solvers, you-as an engineer or a data scientist-need the best starting place that you can.

 

You would never try to create the best solution by starting with a 10-year-old computer, and not the latest and greatest data science libraries, or the data science methods. You would always start with the best tools. And we live in a time where technology has just exploded. And a lot of people, they hear it every day, they know it’s a time of rapid advancement, but they really don’t appreciate how big the change is, and so I have a couple of things that I tell people. Ninety percent of all scientists that have ever lived on planet Earth are alive today.

 

Brian: Wow.

 

Steve: That is huge. And that is not something that was the case even 10 years ago. On top of that, if you look at the number of, say, patents that are being issued and applied for, a few years ago it was a few hundred thousand. It is over three and a half million a year now. If you look at the curve, we have passed the elbow of the curve of an exponential curve and are just skyrocketing. Same with PhDs. Same with the number of folks that have technical degrees. Around the world, countries have become wealthier, put in place technical education programs, and along with the rise in population, there are more capable people doing research and technology work than ever before.

 

And if you look carefully, you’ll see a lot of the technologies out there are the building-block technologies that apply to almost every type of company. So, machine learning, cheap sensors, blockchain, even things like CRISPR, and drones are transforming almost every kind of company, every domain. And so, what’s happening and what we’re seeing is these things are coming together, such that every research and development area in these domains is actually taking these building-block tools which, by the way, many of them are very inexpensive for someone to learn and to use. It costs a couple hundred dollars to run CRISPR gene-editing research. If you look at data science, a lot of those tools are free.

 

And so, that’s all brought the barriers down, and so lots of work is going on, both independent and corporately. And so, what’s happening is, say in agriculture, you’ll have somebody working on data science, and drones, and sensors in ways that they’re really advancing the technology in a way that really could be used in many other domains. But, say if you’re in medicine or space like we are, and you were to hear somebody’s presentation on the data science with these drones and sensors that they’re doing, you probably wouldn’t say, “Oh, wow, we can use that.” Because you wouldn’t understand the domain, you wouldn’t understand the context, and they would use different language.

 

And so, what we’re finding is, that’s happening everywhere. There are these latent solutions out there in the world that can make a huge difference for you in your one little slice, but you can’t find them. You can’t Google these technologies. They require someone that-you don’t know who they are-that has a little domain knowledge in this one area, like agriculture, and another in your domain. And they are able to connect the dots and recognize-see that same presentation and say, “Oh my gosh, this can be translated to your domain with a few tweaks, and then get you that 10x solution.” And it’s interesting because we are seeing this firsthand. We’ve run challenges where we’re trying to improve a solar flare algorithm, and we’ve got, like, a two-hour prediction that we’re trying to get to four hours, and the winner of that challenge ends up being a cell phone engineer who had an undergraduate degree from, like, 30 years prior that he never used in heliophysics, but he was able to take that extracting-signal-from-noise math that they use in cell phones, and apply it to heliophysics to get an eight-hour prediction capability.

 

And it’s this idea of bringing a technology from one area and bringing it in. There’s another great one from Subsea 7; they work in the oil and gas field area, and they actually had this undersea pipeline inspection technology where they would take a ship out for a few weeks at a million dollars a day. They would lower this van-sized piece of hardware next to the pipeline, and for two weeks, they would do a run to inspect the pipeline segment. And they did a simple search on NineSigma with their crowd to find alternative solutions, and within days, they had found a technology in the mining industry that was handheld that could actually do that same work in two hours. That is a 100x improvement. And here’s what Subsea 7 said. They said, “If we hadn’t found this technology-not only is this going to be a huge moneymaker for us-but if we hadn’t found it, and someone else had, we would be out of business today.”

 

And that’s the big takeaway: we live in this world where the technology is going so fast, and there are so many solutions out there that can make a difference. They can make a difference not only to us but to our competitors, the ones that are going to determine whether we stay relevant by competing with us, and we’re seeing companies failing left and right. If you look at the last 15 years of our most successful companies, the Fortune 500, the companies that have made it onto that list, only half still exist. If you look at how long companies stay around, the average in 1958 was 60 years; it is now less than 18. With the rate of technology change, the old model isn’t working anymore. You can’t actually get all the skills you need, all the diversity. That’s why innovation is so important now: it’s happening at such a rate that companies that didn’t use to have to innovate at this pace are now having to innovate in ways they never thought.

 

And so, what we tell our folks is, “Look, you want the best tool, and crowdsourcing is the best tool for finding the best starting place.” And the best starting place is often that place that gets you the technologies you need, gets the failures out of the way. That’s the other thing I tell people: crowdsource challenges not only bring this diverse set of folks to help solve your problem, they also fail really fast and in parallel. So, if innovation requires failure, you get that for free in a crowdsource challenge.

 

There’s a case study I talk about a lot where Roche Diagnostics, this big pharma company, was trying out this stuff for the first time, and they brought their 10 top unsolved problems. And this one problem they had worked on for 15 years to get this one diagnostic tool to work: they ran a 60-day challenge with a $20,000 prize and it gets solved. But what blew them away was when they looked at all of the solutions that were submitted, everything that they had tried in 15 years of R&D-and proprietary R&D-had been replicated in 60 days. They got all that failure in a 60-day challenge. And that was when that crowd at InnoCentive was only about 120,000 people. It wasn’t even nearly-it’s, like, 400,000 now. The large-numbers thing brings diversity and brings that at a scale where you start to up your odds of getting successful solutions, and you get all that failure for free.

 

Brian: Is there a particular attitudinal position, or personality, or skill that a leader who’s working inside a company needs to have to farm that field that you just talked about? I feel like you need a certain type of person to open this door up and to know how to curate the challenge: for example, maybe we won’t put out the ultimate challenge of what we’re building, but we know how to curate just solving the wheel of the car. We’re not going to let on that we’re building a car, but they know how to tend to this garden. Is that a person or a personality that you think of that learns how to do this, and helps the company through that journey? I’m thinking about the internal employee. What does that person look like?

 

Steve: I think it’s got to be somebody who really is looking around-I tend to think it’s a generalist. What we’re seeing now is specialists have a lot of hubris around expertise, and there’s a lot of confidence that the best in the field is the way to go. And it has been for a long time, and there’s value in that. However, somebody who sees what’s going on in the rest of the world, and in this innovation space, starts to see that there’s a balancing act where you really have to start bringing in the best tools. And these aren’t the traditional tools.

 

You’ve got to start to bring in folks that you wouldn’t have before, just to be able to even scratch the surface on some of these. Partially because a lot of the newer solutions are multi-domain; machine learning, for instance, is going to be part of almost every industry in everything. There’s a great book Karim Lakhani and Marco Iansiti just wrote, called Competing in the Age of AI, that lays this out. Once you read it, it’s like, “Oh, my gosh, if your company is not trying to actually put this digital backbone together with an AI factory that’s going to actually be constantly improving, you’re probably not going to be around in five to 10 years.”

 

Brian: Mm-hm. I’m recording him next week actually. [laughs].

 

Steve: Oh, really?

 

Brian: Karim’s coming up.

 

Steve: Karim’s just amazing.

 

Brian: I went to his book release.

 

Steve: We worked with him for about 10 years, actually. It’s funny, we call ourselves the NASA Tournament Lab, and originally they had named their lab that and we actually had to reclaim it. There’s a whole thing there.

 

Brian: That’s funny.

 

Steve: But, we have worked hand in hand with his group and continue to this day because they have just some amazing insights. But yeah, innovation is being driven by this big technology machine that’s happening out there, where people are putting automation to work. And there’s amazing new jobs being created by that, but it does take someone who, kind of, can see what’s coming, and can see the value of augmenting their experts with diversity, with open innovation, with open techniques, with innovation techniques, period. A lot of times, the expert teams we work with just want to dive into solving the problem, and what they don’t realize is they’re much better off spending a significant amount of time actually figuring out what the problem is. Problem analysis-

 

Brian: Oh, thank you for saying that.

 

Steve: Oh, right? I mean, I love the Einstein quote. “If I had an hour to solve the world’s problem, I’d spend 55 minutes figuring out what the problem is and five minutes solving it.” I think that’s such wisdom because it really helps you focus, it helps you decompose the problem. A lot of times, we tell people, “Look, if you really want to solve a problem and make progress-” and solving a problem is an interesting statement in and of itself, because what you’re really doing is you’re increasing the performance of a solution. That’s trying to solve a problem.

 

And so, you’re constantly trying to get the best performance, the lighter, the less weight, the faster, whatever it is. And so, it’s this moving target, and a lot of people don’t, kind of, decompose it: “Well, what’s keeping me from that high reliability? What’s keeping me from that low power? What’s the one component that’s the big hog, that if I can just go innovate on that one thing, I get a 3x improvement in the whole system?” Things like that, I think, are not played up as well. People don’t tend to want to take the time to do structured and facilitated problem work. And oftentimes, engineers have a little hubris where they think they can just do that, and they don’t draw on the right resources.

 

I find resources like Google Ventures’ Sprint, or there are books out there by Tina Seelig, or Ramon Vullings, where there’s cross-industry innovation techniques. And these are all really great ones: to have a facilitator that knows how to facilitate a session, run through, and work. And there are these tools, and you’ve got to learn to use the latest and greatest tools. And I think, in the data science community which I think you focus on, that’s a well-known fact. I mean, you know you’ve got to bring in the latest and greatest toolsets if you’re going to want to be successful in the data science area. I would just say that extends out into the open innovation world, where those are tools to bring in.

 

It’s kind of funny, this may be a little off-topic to your question, but in data science, I actually took a short course and we actually did some machine learning, and it became very clear to me why open innovation works so effectively on Kaggle and TopCoder, and DrivenData and these platforms that are crowdsourcing and doing challenges around data science, because in successful data science, so much depends on getting that right start. You have so many variables, and so many permutations, and so many ways to chop the data up. And you do this trial and error as part of what you do. Well, a crowdsource challenge basically puts that all on steroids, so that you get lots of people trying lots of techniques, and lots of different ways to basically parse the data and come up with different linear regression approaches and variables. And they come in with that, and what happens is the successful stuff floats to the top and wins.

 

And so, then as a data scientist, running one of these challenges gets you hand-delivered a starting point, along with probably four or five other techniques that didn’t win but actually came close, that you can then start to use for combinatory effects to get an even better answer. So, we often tell our folks, this is the tool you need to get the best starting point, because when it comes to this 5x, 10x, this idea of multipliers for your solution, what we say is your starting point is the determination of how far you’re going to go. It’s that multiplier. And that starting point, if you aren’t willing to go put the work in, is kind of useless. An innovative idea that somebody poses out there that no one grabs ahold of, and then does the hard work of implementing, and bringing it to market, is useless. So, we talk about that 1% inspiration, 99% perspiration. The part of that that’s really important is that the 1%, while it’s a tiny slice of the effort, ends up being the multiplier on how effective that solution is going to be. So, finding that right starting point is hugely important.
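Steve’s point about combining the winner with near-winning entries (the “combinatory effects”) can be illustrated with a toy Python sketch. The data and “submissions” here are hypothetical, not from any NASA challenge: when the entries’ errors are reasonably independent, even a simple average of several good submissions tends to beat the single best one.

```python
import numpy as np

# Toy illustration of combining near-winning challenge entries.
# Hypothetical setup: three "submissions" each predict the same
# ground truth with independent errors.
rng = np.random.default_rng(0)
y = rng.normal(size=1000)  # ground truth to be predicted

# Each entry is the truth plus its own independent noise.
entries = [y + rng.normal(scale=0.5, size=y.size) for _ in range(3)]

def rmse(pred, truth):
    """Root-mean-square error of a prediction."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

individual = [rmse(e, y) for e in entries]
ensemble = rmse(np.mean(entries, axis=0), y)  # simple average of all entries

print(f"best single entry RMSE: {min(individual):.3f}")
print(f"averaged ensemble RMSE: {ensemble:.3f}")
```

With independent errors, averaging n entries shrinks the error by roughly a factor of the square root of n, which is one reason the runner-up techniques a challenge hands you are worth combining rather than discarding.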

 

Brian: Are you tired of building data products, analytics solutions, or decision support applications that don’t get used or are undervalued? Do customers come to you with vague requests for AI and machine learning, only to change their mind about what they want after you show them your user interface, visualization, or application? Hey, it’s Brian here. And if you’re a leader in data product management, data science, or analytics and you’re tasked with creating simple, useful, and valuable data products and solutions, I’ve got something new for you and your staff. I’ve recently taken my instructor-led seminar and created a new self-guided video course out of that curriculum called Designing Human-Centered Data Products. If you’re a self-directed learner, you’ll quickly learn very applicable human-centered design techniques that you can start to use today to create useful, usable, and indispensable data products your customers will value. Each module in the curriculum also provides specific step-by-step instructions as well as user-experience strategies specific to data products that leverage machine learning and AI. This is not just another database or design thinking course. Instead, you’ll be learning how to go beyond the ink to tap into what your customers really need and want in the last mile, so that you can turn those needs into actionable user interfaces and compelling user experiences that they’ll actually use. To download the first module totally free, just visit designingforanalytics.com/thecourse.

 

Brian: You talked about this problem definition thing, which is something that is repeated on this show quite a bit. As a product designer and a consultant, I would say the number one thing that comes to me, from a running-my-business standpoint, is that someone comes in with what they perceive to be a problem, and the reality is, they usually can’t define the problem space very well to me, such that we wouldn’t even be able to agree on how to measure a solution, because we don’t really have clarity on what the exact problem space is yet. And so, a lot of the human-centered design process is really about clarifying that problem space, and then I always feel like the solutioning part is so rapidly accelerated when you have that clarity of thought. Is this similar to how you see the innovation space?

 

Steve: Yeah.

 

Brian: I feel like it’s a major challenge. It’s, “We need to build a model,” or whatever. It’s like, “No, a model by itself is an output.” We talk about outcomes over outputs. The outcome from the model is the thing we want. What is the outcome that we seek? That is tied back to the problem. The model, or the dashboard, or whatever the analysis is, or a SaaS product, those are all things that potentially generate an outcome that is desirable, but if we can’t define the problem space well, and what the outcome is, we’re just going to focus on building stuff. I don’t know, what’s your take on that?

 

Steve: Oh, no, that’s 100 percent right. We see that all the time, where we’ll do workshops with groups where we really say, “Look, what is it that you’re trying to do?” And we’ll get these convoluted descriptions. And one of the first things we try to do is say, “Look, just describe to us what is the performance of your current product? Like, what is your state of the art that you’ve delivered, that is the thing that’s your baseline that you’re improving from?” And it’s remarkable to me how many teams don’t have that off the top of their head. They don’t know how much power their system takes, or how fast it performs, or what kind of reliability numbers.

 

And what we tell people is, “Okay. Now, once you understand that, now talk about what success is. What does that look like? Imagine somebody coming in and undercutting you, and basically having a far superior-what is that just out of reach set of performance parameters? Is it 10x less power? Is it instead of having a mean time between failure that’s three months, it’s three years? And describe what is it that you don’t think is quite achievable? That’s your new goal.” Because now you can start to look at the gaps and ask the questions. What’s keeping me from having something that can run continuously for three years? What’s keeping me from having something that works on Picowatts rather than Watts? Oh, what is it? What are the components? What are the problems? And that starts to give you concrete problems, gaps, right? And that’s where you can really focus innovation efforts once you define that, which I think that’s all part of that problem analysis piece.

 

It’s not for everything. It kind of depends whether you’re trying to develop your strategy - what do I need to go attack to get the best competitive product - or whether you have people who simply have problems that are keeping them from production, where they just need to solve a problem. In those cases, it’s a little easier to take that and really analyze it. The other part that we find is really important, especially if you’re going to crowdsource something, is that you don’t always want to pose the problem directly.

 

There’s a great story I tell, probably every time I talk, about a potato chip producer - I think you’ve probably heard this one - where they came in asking, “How can we get grease off of our potato chips?” And the very first thing the innovation company did was reword the problem statement to be, “How do you remove a viscous fluid from a delicate wafer?” By doing that, it invited diversity. If you just said, “How do I get grease off of potato chips?” most people would say, “Well, I’m not a food production engineer. I’m not a food scientist, so this isn’t for me to work on.” But by broadening that statement, you invited the diversity of people working in silicon wafers, people working in biology, people working in all different domains. And that’s really where the solutions you don’t know exist come from, because within a given domain, you tend to have plowed the fields that you know. And it’s the things you don’t know you don’t know that-

 

Brian: The unknown unknowns.

 

Steve: Exactly. And Warren Berger has this great book called A More Beautiful Question where he says, “You know, within a discipline, there are these gatekeepers, and the discussion is actually much more rigid than you would think.” So, in that case I was talking about, vibration was the solution the food production engineers had. They would basically vibrate the tray of chips as it came out of the vat of oil to shake off the oil, but that would break a bunch of the chips, and so they were looking for a solution. Well, it turned out the winning solution was to vibrate the air around the chips at the natural frequency of the oil, [00:36:57 unintelligible] the harmonic, and the grease would just fly off the chip. That, too, was a vibration solution. But all those mechanical engineers, whose specialty that was, missed it. They didn’t see that. And so, we talk about that bubble that forms around a given domain.

 

And it was funny, we did a challenge for NIST recently on differential privacy, and for that whole domain there’s a sub-research community out there at universities and in companies working this problem. And they really didn’t think that any sort of crowdsourcing challenge would work, because they just felt it wasn’t going to take them down a path that would be helpful. They ended up doing this challenge on TopCoder, and it ended up being really successful, and it really changed the direction of how they planned to attack this problem. One of the telling pieces of feedback we got was from some of the competitors, who said, “We are actually researchers. We work for these lab managers, these PhDs at various universities, but they never would have let us work on this solution. They direct which solutions they want us to go after, and so here we could work on things that they didn’t approve, and we could fail on them with very little consequence.”

 

And that’s one of the things that it takes for innovation: you have to be able to fail, and not be afraid to fail, in order to find the real stuff. But what I tell people, as well, is that if you’re not willing to listen to ideas that won’t work, and you reject them out of hand and shut people down, you’re probably missing out on the path to innovation, because oftentimes the most innovative ideas only come after everyone’s thrown in 5 to 10 ideas that actually won’t work. You have to hear those, and they spark something new that then helps you make the connections to the thing that will work. So that kind of listening to failure and being ready for failure is a really important piece. I know I keep diverging off your question, sorry. [laughs].

 

Brian: No. Well, you came back to something that I wanted to ask you about. And I love this quote, “Mean time between failure,” and I was like, “Bang. That sounds like a metric of measurement.” And I had asked that earlier: how could an organization or leadership measure whether or not we’re making some progress with innovation when we’re not getting the potato chip win-and I love it, by the way, that potato chips actually help us because I love potato chips. [laughs]. So, but tell me about mean time between failure and other metrics of implementing open innovation. How do we measure that we’re trying stuff and finding some way to quantify that for the analytical mind, and the executive mind that wants to see ROI?

 

Steve: Sure. I think that’s exactly right. I think a lot of people just think, “Oh, I’ve got a better product,” but they don’t take the time and effort to really come up with measurable metrics. So in mechanical engineering, there is this mean time between failure, which is based on a statistical model and testing, where you get some distribution and you come up with a number that is basically a duration. Most components, when you put them out there, will last, say, three months, and then you’ll start to get failures. And so, that’s actually a pretty standard piece out there.
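As a rough illustration of that idea (nothing NASA-specific, and the numbers are made up), the simplest point estimate of mean time between failures is total observed operating time divided by the number of failures in that time:

```python
def mtbf_hours(total_operating_hours: float, failures: int) -> float:
    """Basic MTBF point estimate: operating time per observed failure."""
    if failures < 1:
        raise ValueError("MTBF is undefined without at least one observed failure")
    return total_operating_hours / failures

# Ten units each tested for 500 hours, with 4 failures observed.
print(mtbf_hours(10 * 500, 4))  # 1250.0
```

Real reliability work fits a distribution (often exponential or Weibull) to the failure times rather than using a single ratio, but this ratio is the usual starting point.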

 

But I think there are just as many of those in, say, user interfaces. How long does it take someone to go through your user interface and use it? How many times do they make mistakes? How efficient are they? What is the capture rate if you’re trying to get them to fill out a form completely? Things like that. In data science, there are really clear metrics, typically. You’re always chasing some kind of error rate for the prediction, and you’re sitting there trying to figure out what the important metrics are. But in performance, especially when you’re trying to look at how the data science or the technology is going to improve the product, you really have to nail down what it is the customer wants. Do they want it to be faster? Do they need it to be lighter? To use less power? To be more reliable? And oftentimes, it’s all of those things.
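The UI metrics listed here can be made just as concrete as MTBF. A minimal sketch, with invented numbers, of two of them - form capture rate and mistakes per attempt:

```python
def capture_rate(forms_started: int, forms_completed: int) -> float:
    """Fraction of started forms that were fully completed."""
    return forms_completed / forms_started if forms_started else 0.0

def error_rate(attempts: int, mistakes: int) -> float:
    """Mistakes per attempt across observed sessions."""
    return mistakes / attempts if attempts else 0.0

print(capture_rate(200, 150))  # 0.75
print(error_rate(400, 12))     # 0.03
```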

 

I use a drone as my example. If you’re trying to build a better drone, what do you need it to be? Well, you need it to be able to fly in any weather, so waterproof it. How long can it do that? What’s the highest wind gust it can survive? It has to last long - drones have gone from these twenty-minute drones to two hours and beyond, and there are drones everywhere. There’s also how much payload you can take, and things like reliability - we talk about how fast you can switch it out, how easy it is to use, whether it’s compatible, standards-wise, with other things it may need to pick up, or camera mountings, or whatever. All of those can be measured. I always think in terms of friction. The overall consumer model is: I have a need or a desire, and I have something that can meet that. What’s the friction that’s keeping me from the absolute best achievement of those desires?

 

And that’s often the cost of what it takes to bring that in, and how good that product is. So it’s often worth looking at what is really not possible - the kind of Star Trek version of it - and that helps you tune in to what is important. But I will tell you, people struggle with metrics. People struggle with measurements. It’s funny, because part of this whole effort has really turned us on to the newly emerging world of work, which is this move to freelance work, where in about seven years more people will be doing gig work than working for companies. And in fact, this COVID virus has probably sped that up.

 

And what’s really fascinating about that is, you find these platforms where gig workers are finding work, and they’re using machine learning. They’re using different ways to match people to the work, and you’re finding that they’re recording data. They’re finding metrics in people’s performance that HR departments never tracked - how fast people work, how well they work with different personality types, very specific and granular measurements of skills and certifications. And they’re using all that to capture individual workers’ digital exhaust and feeding it back into machine learning to find matches with what people need.
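None of these platforms publish their matching algorithms, but the core idea - scoring how much of a job’s skill requirements a worker’s recorded profile covers - can be sketched in a few lines (the skill names and levels here are purely illustrative):

```python
def match_score(worker_skills: dict, job_needs: dict) -> float:
    """Fraction of the job's required skill weight the worker covers.

    Both dicts map skill name -> level on a 0..1 scale.
    """
    total = sum(job_needs.values())
    if total == 0:
        return 0.0
    covered = sum(min(worker_skills.get(skill, 0.0), need)
                  for skill, need in job_needs.items())
    return covered / total

job = {"python": 1.0, "statistics": 0.5}
alice = {"python": 0.9, "statistics": 0.5, "design": 0.8}
print(round(match_score(alice, job), 3))  # 0.933
```

A production matcher would learn such weights from outcome data (the “digital exhaust” mentioned above) rather than hand-coding them.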

 

And this is opening up the possibility of anyone being able to spend very little money to go out to one of these platforms and say, “Hey, I need someone who can meet with me for two hours to help me with this really technical problem on how to use quantum computing, or how to use this new widget,” and literally pay $500 to bring in an expert, get going on the latest and greatest technology, and then pop out - and you didn’t have to go through five months of HR pulling in some new expert that you really didn’t have budget for. There’s just a whole new model for how to deal with these new skills and new technologies, and it’s really part and parcel of what I call the open innovation world, which is open talent. Contests are great if you don’t know what the solution should be, or what skills need to be applied, because they cast a wide net across a big network of people. But if you know who you need or what skills you need, then these other platforms are really great ways to focus and get that specific skill and expertise brought to bear on the problem.

 

Brian: Right. But I would say-you can argue this back to me if you disagree-using open innovation to figure out a strategy, like, we know that we need to use AI in our company. We don’t really know where to start. To me, that’s a problem definition because you don’t even know what you want to use AI for. That is not the kind of thing you send out, or maybe you do bring in expert help to help you figure out what it is. It’s not that they’re going to come and tell you what it is, but they can facilitate the exercise of getting to it. This is work that I do quite a bit. The solution space is a place to go out and try and fail rapidly, potentially in this open market, once you know what the problem space really looks like. Is that a fair summary or…?

 

Steve: Yeah, it was interesting. I was in a forum with Open Assembly yesterday, where Mike Morris, CEO of TopCoder, said, “You don’t want to ever try to outsource your thinking” - the brains of your operation. You can’t do that. But what you can do is bring in expertise to help you, like you said, facilitate - do something like the Google Ventures Sprint methodology. It’s a great way to figure out what your organization’s focus and strategy should be. It’s a week long, it’s facilitated, it’s a very straightforward methodology, and it brings to bear a lot of this innovation and diversity piece. If you’re bringing together a team and facilitating, and you know you’re going to be headed down a path that you don’t necessarily have all the skills and expertise for, use some of these freelance folks: bring one in and say, “Hey, I just want to hire you for this session, this three-hour session, to bring your expertise into it, hear what we’re saying, and help us get direction.”

 

You have access to some just amazing people out there. And why not tap into that? And being cognizant of, hey, we’ve got a very technical team here. Let’s bring in an artist that also has knowledge about what we do. Or bring in someone from HR that also has a little bit of domain knowledge in our area, so that you get these people that can help you connect the dots to things where somebody else may have solved this problem already, or doing it the right way already. And why re-plow that ground? If the solution is out there, go find it. And I think that’s-a lot of times, that piece of it is underserved. People think that their domain is so unique and what they’re doing is so unique, that they have to solve it all. And what you forget is, it’s a big bad world out there. And there are often solutions that are public-or that someone has experience with-that they can help bring along and help you with.

 

Brian: Yeah. I have to reiterate that, because I think it’s so critical to have that problem space well defined, and sometimes I feel like it’s easy to go native on this. This is something I deal with: I’m not vertically specialized in my work, I’m horizontally specialized across data products, so I touch a lot of different industries. When someone hires me, a lot of times it’s like, “Okay, I don’t know anything about patents yet, so we’re going to talk about” - and I actually had a client that worked in this collaborative patent research space; they did technology forensics and things like that. But you bring a set of thinking and experiences that are different, and you come in with that fresh perspective - and you have some knowledge, it’s not that you’re a totally random fit; we didn’t just pick someone with a completely different skill set and no context for what you’re doing - but the point is to get the problem space super well-defined so you can help the internal team figure out, “Okay, this is actually what we need to solve for. We had in our heads we’re building X thing with patents or whatever.”

 

It’s like, “No, actually, what we’re trying to do here is this other thing.” But you have to kind of dive in and then pull back out, and you come out with this strategy - what I would call a design strategy - which is a plan for the work that needs to happen, and probably includes some research and some further refinement of that problem space. It’s a foreign concept, I feel, to some, and it takes an aspirational leader to realize that you might be too close to your own thing right now - I mean, this is your job, you live, eat, and breathe it every single day - and that might be part of your challenge.

 

Steve: Yeah, well, and I think there are several pieces. There’s a balancing act that a lot of folks have: a lot of people who are very specialized have to pull back out, and they really need a generalist, somebody who can make the connections. I’m actually reading a great book by David Epstein called Range that is all about the value of this multi-discipline type of person, because what we find now is that the complexity of our world is increasing along with all these technologies. Everyone is required to string together many, many more specialties than they ever had, which is really breaking the old HR model of, “I’m going to go get the handful of skills I need to make my product; I’m going to hire them, I’m going to capture them, and they’re going to work for me.” There are a couple of flaws with that.

 

One, it’s almost impossible to hire all of the expertise. Just think about IT. Ten years ago, you could hire one person who understood enough of the IT world to service an organization. Now you’ve got cybersecurity, cloud work, AI, software. It’s just this endless set, and you don’t always need a full person.

 

And so, how do you actually get this hybridization of all the skills you need? This is where the new freelance economy starts bearing fruit, because what you really need is your core to do the thinking and to know enough to go hire all the specialties. But if you try to hire all of those specialties in house, a) you’re not going to be able to, because there are so many of them, but b) as soon as you hire them, you’ve now captured them and, in a lot of cases, taken them out of the learning curve. At most companies, average spending on training is $1,000 per year per employee. Well, look around at what’s going on in the world. Things are changing so fast that almost everyone needs to be spending time learning.

 

I actually met one of TopCoder’s lead freelancers and was talking to him, and I asked, “How much time do you spend learning new technologies and just trying to keep up?” He kind of shocked me. He said 60 percent of his time. And I said, “Wait, how do you have time to do work and make money?” He said, “Oh, I do fine.” He actually makes, I think, six times the average salary in Greece, where he works. He’s like, “I’m doing fine there.”

 

But the model has changed. And if we really want people to be high performers - I don’t know if it’s 60 percent, but they need to be training and learning much more than we give them time for. A lot of people don’t realize the metrics behind keeping employees. Most people that work don’t realize - though their bosses know this - that they cost the company two to three times their salary, because those people have to have HR, and offices, and air conditioning, and security, and IT, and all of that costs money. And the sad part is the average efficiency of the workforce is 37 percent - three out of every eight hours a day. And that’s not because people are lazy; it’s because we burden them with all sorts of IT, and training, and compliance, and staff meetings, and we caused-

 

Brian: Let’s have a meeting to talk about that, Steve. [laughs].

 

Steve: Right, exactly. And the reason this is all going to come to a head is that when you can reach out and pay a freelancer two to three times what you pay your internal employee and still come out ahead, that’s going to start to make a difference, because you have to have that expertise if you hope to be competitive. And so, there’s this remixing of what the workforce means. I see HR and middle management - those functions - moving to the cloud, if you will. These new labor platforms are going to be able to do that much more efficiently, both on the people-development piece and in finding you the diversity of skills you need at the moment to get the complex work done. And it has to have lifelong learning just built in.

 

So, there’s a crowd platform called Paro.io that does finance and accounting, and they use machine learning to match their freelancers to work. But when they do it, they actually try to match somebody to something where they’re on the lower end of the skill, so that by the time they finish that project, they’ve learned and reinforced a new skill - every time they do a new task, they’re getting smarter. That is brilliant, because we live in a world where if you’re not keeping up with skills, adding to your skill base, and upskilling, you’re probably at risk of being laid off and having a really hard time living in this world.
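Paro’s actual matcher isn’t public, but the “assign slightly below skill level so the project upskills you” heuristic described here can be sketched like this (the names, levels, and stretch window are all invented for illustration):

```python
def stretch_match(candidates: dict, required_level: float, stretch: float = 0.15):
    """Pick a candidate (name -> skill level, 0..1) for a task.

    Prefer someone slightly *below* the required level - within `stretch` -
    so finishing the task teaches them something; otherwise fall back to
    the closest already-qualified person.
    """
    stretchers = {name: lvl for name, lvl in candidates.items()
                  if required_level - stretch <= lvl < required_level}
    pool = stretchers or {name: lvl for name, lvl in candidates.items()
                          if lvl >= required_level}
    if not pool:
        return None
    # Whoever sits closest to the requirement wins.
    return min(pool, key=lambda name: abs(pool[name] - required_level))

people = {"ana": 0.95, "ben": 0.70, "cai": 0.62}
print(stretch_match(people, required_level=0.75))  # ben
```

With no one inside the stretch window, the fallback picks the closest person at or above the requirement, so the task still gets staffed.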

 

Brian: Yeah. Steve, this has been a great conversation. Where can people learn more about your work and follow you? Are you on social media or LinkedIn? What’s the best way to do that?

 

Steve: Sure, yeah, I’m on LinkedIn: Steve Rader. If you just put in Steve Rader, R-A-D-E-R, and NASA, you’ll find me. If you go to nasa.gov/solve, you’ll find where we post all of our challenges and all of the work we put out there for the crowd, and you can learn a little bit more about what we do. I’m also on Twitter, @SteveRader, and we have @NASAsolve too if you want to follow the official tweets. So, yeah, we’re out there. We’re talking. And I really appreciate you having me on. This is fun to talk about.

 

Brian: Yeah, it’s been really great. I will definitely put all those links in the show notes. Any last thoughts for this community of product managers, and data scientists, and analytics leaders listening right now?

 

Steve: I would just say this. You may have heard some things today that kind of make you uneasy. Change is really overwhelming, and the shifts in labor markets make everyone feel like, “Oh my gosh, the 40-hour work week’s going away; what am I going to do?” I will tell you that automation and all this change are very scary, but the more I work in it, the more hopeful I become that the future in front of us - when we adapt to it over the next 10, 15, 20 years - actually has a lot of really great stuff about lifelong learning, about really working your passion, leaving all that bureaucracy behind, and these platforms connecting global markets. So, the robustness of work out there that anybody can do is really much larger than people think. There is lots of work - new and interesting work - coming online every day. If you just immerse yourself in the idea of, “I’m going to learn, I’m going to try new things, and I’ll still be okay,” I think that’s coming, and it’s going to be an exciting and pretty amazing time. Because people will pursue more different things through their lives, they will start to internalize that diversity we talked about. It’s going to make individuals more innovative. So, I see some really exciting stuff here, and I feel great about the future.

 

Brian: Awesome. Thank you so much for coming and talking to us today about open innovation. I really appreciate it.

 

Steve: Oh, this has been great, Brian. Thanks for having me on.

 

Brian: Cool, cheers.
