013 - Paul Mattal (Dir. of Network Systems, Akamai) on designing decision support tools and analytics services for the largest CDN on the web

Experiencing Data with Brian T. O’Neill

May 21, 2019 | 00:44:35

Show Notes

Paul Mattal is the Director of Network Systems at Akamai, one of the largest content delivery networks in the U.S. Akamai is a major part of the backbone of the internet, and on today’s episode, Paul talks about the massive amount of telemetry that comes into Akamai and the various decision support tools his group is in charge of providing to internal customers. On top of the analytics aspect of our chat, we also discussed how Paul is approaching his team’s work as a relative newcomer to Akamai.

Resources and Links:

Akamai

Twitter @pjmattal

Paul Mattal on LinkedIn

Paul Mattal on Facebook

Quotes from Today’s Episode

“I would say we have a lot of engagement with [customers] here. People jump to answering questions with data and they’re quick. They know how to do that and they have very good ideas about how to make sure that the approaches they take are backed by data and backed by evidence.” — Paul Mattal

“There’s actually a very mature culture here at Akamai of helping each other. Not necessarily taking on an enormous project if you don’t have the time for it, but opening your door and helping somebody solve a problem, if you have expertise that can help them.” — Paul Mattal

“I’m always curious about feedback cycles because there’s a lot of places that they start with telemetry and data, then they put technology on top of it, they build a bunch of software, and look at releases and outputs as the final part. It’s actually not. It’s the outcomes that come from the stuff we built that matter. If you don’t know what outcomes those look like, then you don’t know if you actually created anything meaningful.” — Brian O’Neill

“We’ve talked a little bit about the MVP approach, which is about doing that minimal amount of work, which may or may not be working code, but you did a minimum amount of stuff to figure out whether or not it’s meeting a need that your customer has. You’re going through some type of observation process to fuel the first thing, asset or output that you create. It’s fueled by some kind of observation or research upfront so that when you go up to bat and take a swing with something real, there’s a better chance of at least a base hit.” — Brian O’Neill

“Pretend to be the new guy for as long as you can. Go ask [about their needs/challenges] again and get to really understand what that person [customer] is experiencing, because I know you’re going to be able to meet the need much better.” — Paul Mattal

Episode Transcript

Brian: Hi. We’re back with Experiencing Data here and I have Paul Mattal on the line, who is currently the Director of Network Systems at Akamai. How’s it going, Paul?

Paul: It’s going great. Thanks, Brian.

Brian: I’m glad to have you on the show. You’re working at one of these companies that I think of as kind of like the oxygen of the internet. It’s everywhere, but you don’t really see it because it’s all invisible, and yet it’s this big thing behind the scenes. All this data is swimming around the internet, and Akamai is in the middle of a lot of that, largely responsible for making sure it’s moving quickly and is available at the right time and in the right places.

As I understand it, you’re in a new position and you’ve changed domains. Previously you were working in the space of legal patent work and digital forensics, and you built some of the tools that your previous company makes—you can tell us a little bit about those. Now you’re moving more into the bits and bytes of the internet, and you’re responsible for creating data products like decision support tools for the people that keep the Akamai network running smoothly and anticipate demand? Did I get all that right?

Paul: That’s exactly right. At Akamai, we like to think of it as we’re the ones that make the internet work. There’s a notion that the way things work on the internet is you just simply put your content up on a server and the rest is history. But these days, there’s a lot of complexity. There are many, many users who want access to the same content at the same time. Akamai makes that content all available to everyone when they need it and how they need it.

My past job, as you mentioned, was quite a bit different, although it had some similar qualities. I was helping to develop systems and tools for lawyers and for consultants to lawyers, in some cases to analyze patents and to help them better understand the subject matter of patents, so we created some applications there.

Here at Akamai, I’m also creating applications and tools to be used by the members of the network team who are responsible for deploying and maintaining the whole Akamai network. That breaks down roughly into tools that help us manage our work, tools that help us with analytics and planning, and also tools that help us visualize data. It is somewhat of a shift. A lot of the domain knowledge is different, but it’s interesting that so many of the problems end up being similar.

Brian: Tell us a bit about who the end-customer is. How many internal customers do you have? Do they break up into personas or segments? Like you have network administrators and you have whatever people. Tell us a bit about who those people are that you’re designing these tools for or you’re helping deploy these tools for.

Paul: There are a couple of groups. There’s the infrastructure group, which is responsible for deploying and maintaining all of the servers. That’s one class of user, mostly using our tools in a logistical fashion to coordinate and organize their work. There’s a planning team who is thinking about the capacity of our network: Do we have enough for what’s coming down the pike? Do we have the right capacity in the right places? We also have users who are thinking about the architecture of the network and about how we build and optimize our hardware and our network, to continue to be cutting edge and to continue to meet the needs of our customers. So, different people looking at different tools and different data for different purposes.

Brian: Cool. Just a little fun question here. This is probably because I don’t know the domain very well. When there’s a big event coming on the internet, let’s take something like the Super Bowl, or the World Cup, or the new Game of Thrones, or whatever, are there literally changes that you guys go and make to facilitate a major event? Or are those actually more like a blip in terms of internet traffic and all of that?

Paul: It depends. Certainly, some of those events have been some of the largest data traffic we’ve seen move across our network. Often, there are considerations especially depending on where exactly we expect the viewers to be for those events. We may deploy additional capacity in one geographic area or another.

Brian: Going back to the people at the end of these tools—again, these are decision support tools—how do you know if your team is doing a good job? How do you measure that the end-customers are getting the right information and they believe it, that they’re willing to take action on it? Do you have a regular feedback cycle or interaction with these different personas that you talked about?

Paul: Yes. One of the most important aspects of what we do is trying to figure out exactly how to measure how we’re doing, especially in the analytics space. The productivity tool space is a little simpler.

We can tell pretty much where the pain points are. People come to us and say, “This interface isn’t working for me,” or, “These five things are in five different places and I want to use them as one.” Those are more straightforward kinds of feedback.

With analytics, we find it comes down a lot to how successful we were at predicting—how much excess capacity did we end up with in a place where we didn’t need it, for example—and all those kinds of questions. We meet with our customers pretty regularly and we also have some metrics that we compute to give us an idea of how we’re doing.
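To make that concrete, here is a minimal sketch of the kind of after-the-fact capacity metrics Paul alludes to—forecast error, leftover headroom, and unmet demand. The region names, fields, and numbers are all hypothetical and invented purely for illustration; nothing here reflects Akamai’s actual systems or data.

```python
# Hypothetical sketch of after-the-fact capacity metrics like the ones Paul
# alludes to. Region names, fields, and numbers are invented for illustration;
# this is not Akamai's actual data or tooling.

regions = [
    # (region, forecast_peak_gbps, actual_peak_gbps, deployed_capacity_gbps)
    ("region-a", 800, 950, 1000),
    ("region-b", 600, 400, 900),
    ("region-c", 300, 310, 320),
]

for region, forecast, actual, capacity in regions:
    forecast_error_pct = 100.0 * (forecast - actual) / actual      # + means we over-forecast
    excess_capacity_pct = 100.0 * (capacity - actual) / capacity   # headroom that went unused
    unmet_gbps = max(0, actual - capacity)                         # demand we couldn't serve
    print(f"{region}: forecast error {forecast_error_pct:+.1f}%, "
          f"excess capacity {excess_capacity_pct:.1f}%, unmet demand {unmet_gbps} Gbps")
```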

Brian: Are those quantitative then? Those are all quantitative metrics or do you have any type of qualitative conversations that go deeper than like, “I wish there was a filter for the date on this chart,” or stuff like that. Those things do matter and it’s the sum of all those little, tiny details that add up into good experiences typically, but I’m curious if you have any deeper qualitative type of interaction with these end-users.

Paul: A lot of what we’re discussing these days, for example, is that there’s a tremendous amount of telemetry available that comes off the platform—numbers about what’s going on in the network that we could measure and capture.

In many cases, the conversations are about, “Hey, can we capture more of this data? Is there somewhere we can sample more frequently?” or, “Can we get access to this kind of data that we don’t have right now, so that we can optimize more effectively on the things that actually matter, where the actual bottlenecks in the network are?” versus more simplified models based on less data. One of the most common kinds of feedback we get is requests for more data and differently sampled data.

Brian: We talked about this a little bit during our pre-call about topics, and you mentioned that you have different classes of users in terms of who’s capable of designing an effective tool for themselves. I think you said you’ve got a mix of tools that are custom-built, which might have two-way interaction where data’s being put back in through forms or whatever in the tool. Then you have Tableau and some kind of rear-view-mirror, historical reporting interfaces which, as I understand it, start the user with a blank slate? Is that correct? Then they put together the views that they want and the reporting that they want?

I’m kind of curious for you to talk about how many people are using custom tools that you built versus the ones that they designed for themselves. Are people doing a good job creating the tools they need for themselves? Do you have a sense from their feedback that they’re looking at the right data, that they know how to interpret it, that they know how to visualize it? Can you talk a little bit about that?

Paul: Sure. Our organization has hundreds of people in it and I would say at least probably 50%–75% of those users are highly technical, which is very helpful, actually. They often come to us with a better idea of what they need. In some cases, we can give them good interfaces to go build their own tools.

The historic approach to that here has been to give them pretty decent access to the data in our databases and even the engines themselves. Many of them are comfortable writing their own queries. But we also have a very mature ecosystem of query exchange. We have this tool that allows people to write their own queries and share them with others, and then others can manipulate those queries further and customize them to their own needs. They’re very familiar with that.
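As a loose illustration of that query-exchange idea, here is a minimal sketch of a shared, parameterized query that one user publishes and another customizes. The table, column, and query names are all hypothetical; Paul doesn’t describe the actual tool’s interface, so this is only a sketch of the general pattern.

```python
# Hypothetical sketch of the "query exchange" pattern: one user publishes a
# parameterized query, another customizes it. The table, columns, and query
# names are invented; the real tool's interface isn't described in the episode.

from string import Template

shared_queries = {
    "servers_by_region": Template(
        "SELECT region, COUNT(*) AS server_count "
        "FROM deployed_servers "
        "WHERE status = '$status' "
        "GROUP BY region "
        "ORDER BY server_count DESC"
    ),
}

# A second user pulls the shared query and swaps in the parameter they care about.
my_query = shared_queries["servers_by_region"].substitute(status="active")
print(my_query)
```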

The piece we’re bringing in next is this idea of making visualization a self-serve area as well, where, with a tool like Tableau, you can point Tableau at the same data that might be the output of these queries and then have powerful visualizations on top of that.

The other piece of this is how much of it we do and how much of it customers create from whole cloth. It’s kind of a balance. Some people come to us and say, “Here’s what I need but I don’t know how to do it,” and then they ask us to do it. Sometimes a customer actually originates it and will say, “Here’s a report or a query that I think is interesting,” and we’ll say, “Oh yeah, that’s interesting. Why don’t we bake that into something more sophisticated?”

It’s kind of a mixed bag, but I would say when most people come to us, there’s usually something that we already have that they can use as a basis, and then they can usually modify that further. That’s been a pretty successful model for us because it really lets people get what they want—the very detailed, precise view that meets their needs—but benefit from all of the other work that we’ve put into making those views and those approaches effective and mature over many years.

Brian: It sounds like you don’t struggle as much with engagement with the analytics. You actually have plenty of that? Or would you say that’s not necessarily entirely true?

Paul: Yes. I would say we have a lot of engagement with that here. People jump to answering questions with data and they’re quick. They know how to do that and they have very good ideas about how to make sure that the approaches they take are backed by data and backed by evidence. People are very mature in that sense.

Brian: Since you have this mix of custom tools that you guys are building and the self-serve stuff, how do you decide which wheel is going to get the most oil? You’ve got these custom tools, you’ve got some Tableau stuff, you’ve got people coming in who maybe are using Tableau but don’t know how to build the reporting they need. Is it based on a business driver? Like, if we get problem X wrong, it costs a lot of money, so we’re going to put our team on this problem and sorry, Jane, you’re going to have to take that Tableau tutorial and figure it out yourself. How do you resource like that?

Paul: As with any place, there’s certainly scarcity. Everybody wishes they had twice the people they have, and maybe even twice the computing resources and everything else. At a high level, a lot of it is driven by a strategic plan, by an idea of what we as an organization are trying to accomplish. That determines which things get the most people and the most priority. There’s actually a very mature culture here at Akamai of helping each other. Not necessarily taking on an enormous project if you don’t have the time for it, but opening your door and helping somebody solve a problem if you have expertise that can help them.

We find that it’s a balance of those things. We work on major roadmaps—large projects or tools for strategic reasons, particularly efficiency gains we’re hoping to achieve as an organization—and we also spend a lot of time helping the folks who need it get where they need to go.

Brian: That makes sense to me. Is the feedback loop in place such that there’s some point in the future at which you look back on these projects, or products, or tools that you’ve built and say, “Did we make a dent? What were the success criteria for those? What does that three-month or six-month rear-view look like?” Do you guys talk about what that is, so you know whether or not you hit your objectives? “And since project X got four times the resourcing, did we get four times the value, or whatever the value was that was determined?”

I’m always curious about these feedback cycles because there’s a lot of places that they start with this telemetry and data, then they put technology on top of it, they build a bunch of software, and a lot of times the releases and the platforms are looked at as the outputs and the final part of this and it’s actually not. It’s the outcomes that come from the stuff we built that matter. If you don’t know what outcomes those look like, then you don’t know if you actually created anything meaningful.

So, I’m curious about that feedback cycle—does your business know? Like, “We have to see. We can’t get predictions wrong,” or, “We don’t want to have more than 12% server waste from a wrong prediction,” whatever. I don’t know what those metrics are. Can you talk about that feedback loop from a business and a value perspective?

Paul: Sure. Some of the things we’re doing are very tied to specific business goals for certain kinds of […]. These are targets for dollars saved in terms of operating the network at a lower cost. In those areas, we are measured pretty acutely, pretty much on a yearly basis, along those lines.

We’re working towards getting better at what happens in between, during the rest of the year. You can often go off-track a little bit somewhere in one month, and that can cost you down the road. We’ve been focused on trying to get to more of a monthly evaluation where we can break things down, try to deliver value on a monthly basis, then get feedback from customers, and also see how they’re affecting the numbers in real-world applications of this data to actually optimize.

Then we can learn: are we consistently on track? Are we moving in the right direction? I’d say that’s definitely an element of what we do. Right now, we’re doing it more like every six months or a year. At a granular level, we’d like to move that to a much shorter term and focus on constantly delivering smaller chunks of value.

Brian: That’s good to hear. My understanding from when we talked is that you’re almost what I would call a product manager, even though you’re not developing commercial products, but you’re overseeing the creation of these different tools.

I’m curious. Do you have the equivalent of a product manager role where one person’s job is to make sure that whatever analytics and/or custom tools you guys build for the network operations team or the team that deploys the servers, they live and breathe that world and they’re totally responsible for serving the staff who work on those technical problems? Is that how it’s shaped, or is everyone touching all of the different parts of Akamai?

I’m just wondering how you get into that world. What’s it like to be the server administrator and predicting where to deploy servers? How is that structured? Maybe you don’t have enough staff to break it down that way and I’m asking a leading question, but I’m curious if you could talk about that a little bit.

Paul: We actually do have four teams within our group, and they are divided up with focus on the different stakeholder groups within the network organization. There is definitely some division. There’s also some cross-responsibility, but there are definitely folks who know specific subject matter areas very well and who are critical in those areas—anything more than a simple bug fix in an area is going to involve somebody managing that area.

Now, for our largest projects of all, we do have product managers as well as project managers involved in the creation of the larger ones. I’d say about two or three are major systems, and then there are several hundred other tools or various pieces that we manage and give care and feeding over the years. That stuff is either being taken care of by one of these SME areas or it rolls up to me, especially if it’s something new.

A large part of my role is helping, at the outset, to say, “Let’s define what this tool looks like. What is it doing? Who is going to use it? What do those people need? What are the processes at play here at Akamai that this is a part of? Do we understand those processes? Have we optimized those processes?” That’s a lot of what I end up doing with the rest of my team: defining those new products so that they’ll be the most successful as we build them and get off in the right direction.

Brian: That sounds awfully like design to me.

Paul: It is.

Brian: Is that traditionally how things have been done in this group, or is this something that’s new? How’s that being received? Are you getting, “Just give us the data and we’ll put it together,” and you’re like, “No. Help me understand what you’re going to do with it at the end,” and they say, “Well, I’ll know it when I see it”? Is it that kind of thing, or are they like, “Great, let’s get it right”? What’s that process like?

Paul: The history of our group is that we have probably not put enough focus on planning and design, but I think it’s an area where people realize we need to spend more time. They really are now focused on that as a goal and understand that it’s important in many contexts.

That’s not to say that there aren’t times when people will say, “Here’s what I need and I need it tomorrow”—you know that comes up. It’s a balancing act that is always a challenge, but I think there is increasing sense and increasing support across the network organization, and maybe beyond that in the platform organization and other parts of engineering at Akamai.

It’s really a much better result if you make a plan upfront, you understand the context into which you’re creating this new thing, and you understand how it’s going to impact the processes and flows that occur once you’ve built it.

Brian: Maybe you haven’t been there long enough—I know you’re somewhat new in this position—but have you been there long enough to go through a full cycle with that, where you’ve taken someone through, “Let’s hold on. Let’s figure out what’s actually needed, what the real problem space is like,” and then you’ve gone all the way through maybe building a product or a prototype or something? Have you gone through a full cycle yet? Or are you still in the design phase on some of these?

Paul: For a couple of smaller projects, we’ve definitely done that. There have been cases where people have come and said, “Hey, could you do X?” and we’ve said, “Well, we could do X, but that actually requires more code and more effort. We have this other thing over here that can actually accomplish that, and it puts you more in the driver’s seat because you can help maintain it later. How’s that?” Often, the results are very positive. If we can actually get things implemented faster, people are happier in the end, and it’s less maintenance for us overall in the long haul.

So, yes on the small things. On the bigger things, those are in progress and we’re excited about the design phases that are going on now. They’re larger and more productive than they’ve been in the past. We expect to see an output of that probably by the middle of this year or later in the year.

Brian: Can you tell us about what some of those activities are? I think some of the people listening to this are not coming from digital-native companies. The whole product design process is maybe foreign to them. Can you tell us about like, “What are you doing during this time? Why aren’t you writing code? You have the data. Put Tableau there and build some reports.” What are you doing that’s not that during this phase?

Paul: Usually, the first thing we’re doing is trying to find out who all the people are that interact with this data, or these kinds of systems, or these particular business objects, or aspects of Akamai’s network. Often at the start, we find there are common problems. There are people in other parts of the organization who may already have a tool that allows them to do this. We also want to go observe those users. We want to find out whether they are satisfied with the tool and whether the tool is meeting their needs, which are actually two different questions.

It’s really seeing whether what they’re doing is an optimal process, and seeing whether we can create a solution to this new problem, or borrow a solution and change it in some way that helps everybody. One of the interesting aspects of design here is that there are many groups that interact with the same data in so many different ways.

I think a lot of that design phase is about, “Hey, of the tools out there, how do we integrate them so that they’re the least work for us? How do we make sure that we’re choosing a good solution and we’re actually meeting the user’s needs?” Probably the last part of that, especially in our group, is not getting stuck on meeting 100% of the use cases with any single tool, because in some cases you’ll get 80% of the use cases for five groups and you have to say, “Okay, that’s fine. For this other case, they’ll do it this way.”

That’s a lot of what goes into the design process. Really just understanding what the users are looking for, how that matches up with stuff we already have, and then how we integrate that use case into what we maintain, in a way that is streamlined and effective for them and also streamlined and effective for us.

Brian: When you talk about getting to know what they’re going to do with this information and how they want to use it, is that through them self-reporting, like talking to you in a meeting? Is it through you observing them doing what they’re doing now without the tool?

Is this largely like, “Right now I can’t do any of this. I need this tool so I can enable this new thing that I currently don’t do,” or is it more like, “I have this long, convoluted process I have to do in order to achieve X. Can you help me build the tools so I can do it in less time?” In one of those, there’s already a recipe for something and you’re trying to optimize it; the other one is more like, “This is a new thing I’ve never been able to do, but maybe I could with your help.”

Do you put it into those buckets and then if it’s the former, how do you figure it out? Is it observation or just them talking to you about how they’re going to use it? How do you figure that out?

Paul: Both of those scenarios definitely come up. We often get requests about processes that already exist. At some point, there’s some tool in there already; sometimes it’s a highly manual process. In that scenario, one of the great assets of this particular group is that we have a whole standards, documentation, and workflow optimization group here within network, which is a real treasure to have. Usually, when that kind of problem comes up, the first thing we do is say, “Okay, let’s work with that group and let’s get a really good map of what this process looks like end-to-end. Let’s look at what the steps are, what the tools are now, where the pain points are, and then once we have drawn this out so that we understand the context, let’s actually first look and see whether there’s any way we can optimize the process, because the last thing we want to do is spend a lot of time implementing automation steps for a process that shouldn’t be that way in the first place.”

We look at that process and we say, “Okay, how do we simplify it? How can we bring automation to bear, to make the process more straightforward, take less time, take less human effort.” Then, we usually at that point, sit down and actually design the automation solution around that.

That’s one kind of problem, and that process of workflow [analysis…] does involve what we call the business process performers at each step. These are not the people who manage those areas; these are the people who are actually doing the work. We want to know what they’re actually doing, we talk to them whenever we can, and we actually go [observe] them, because we can learn at least as much and probably more by watching what they’re doing and what they’re struggling with. That’s one side of it.

The less well-described problems are the ones where nobody knows yet—this is something brand new. There, I think we tend to sit down and try to understand what these users are trying to accomplish and what problems they had in the past that this addresses, because so often, something that’s new is really in some way connected to something old. We did this before and it didn’t really work, or we have a gap here, there’s something that we’re not doing as well as we should or we’re not doing at all, and how do we make that better?

A lot of it is about understanding what they’re looking for and I think the big element of that that’s key is breaking it down into manageable phases so we can deliver quickly and iterate quickly. The last thing you want to do is sit down and say, “Okay, we think we understand exactly what you need. Now, we’re going to go off for a year-and-a-half and build it.” That’s always a recipe for disaster.

So, what we want to do is sit down and say, “Let’s take the most important crux of what you’re trying to get at here. Let’s implement something in a few weeks or a month. Then, let’s sit down, get it in your hands, get your feedback on it, and then figure out the next piece.”

This doesn’t mean we can’t have a plan like, “Here’s roughly what we think the phases are going to be and how they’re going to be laid out. But let’s have these checkpoints along the way and let’s iterate based on what we’re actually able to learn and what we’re actually able to benefit from.” What we’ve found is that the key to those kinds of new projects is the fast iteration cycle.

Brian: We’ve talked a little bit about the MVP approach, which is about doing that minimal amount of work, which may or may not be working code, but you did a minimum amount of stuff to figure out whether or not it’s meeting a need that your customer has. You’re going through some type of observation process to fuel the first thing, the first asset or output that you create. It’s fueled by some kind of observation or research upfront so that when you go up to bat and take the swing, there’s a better chance of at least a base hit and not a strikeout or something.

I fully support that type of effort instead of going off with, “We have all the data. We’ll send you back a kit and then you can put it together yourself.” It takes a year, you dump everything into the data warehouse, and then you fall into the Gartner 85% of “big data projects that failed” category, which nobody wants to be in. I think it’s really great that you’re doing some of that.

Earlier, you said you have a lot of different products and that two to three of them are large. I’m just curious—large by number of users? What justifies putting a dedicated product manager on it, and what’s the extra love that’s received because you’re one of those two or three? Is it that they have a dedicated designer and dedicated engineers? Is it more research time? Tell us about your big ones.

Paul: I would say that the largest projects usually have someone who’s effectively an architect for the project, who may also be part of the development team. They usually have a development team of several people—at least in an ideal world, three or four is probably typical for larger projects. Then there’s a project manager who is managing the project and also how that reports up into our overall program of initiatives for the organization. Usually, for those projects to get substantial resources, they’re going to be priorities for the organization at some higher level.

The last [piece], probably the most important piece, is that there’s a product owner, who may or may not be the architect. In some cases the architect plus feedback from the stakeholders is enough to make it work, but most of the time it’s somebody who is also the project owner or the product manager who’s really responsible for shaping the design of that product.

For example, one of the big tools we’re working on has to do with increased virtualization that we’re rolling out within the Akamai network. This is a big project because it’s a company-wide initiative. We have somebody working on designing the interface and working to figure out how the interface to provisioning works within the context of all the processes we have here at Akamai.

Another example is one of our key databases for analytics and for planning. There, the ownership essentially sits with a data team who is responsible for this database, the universe of this data, and roughly how it’s visualized. That team has responsibility for the database—its schema, how we got that data, where it comes from, its cleanliness—but also for the visualization aspect of it, and it’s now also inheriting the question of how we use Tableau as part of that ecosystem. Just to give you some idea of how these projects are organized and what the roles are.

Brian: Got it. Your large projects fall both into maybe a database that’s sitting behind Tableau as the interface and then you have another one, the server provisioning one, which sounds like a custom web-based application or something?

Paul: That’s right.

Brian: So then, for that one, to me that’s the decision support. The provisioning action would be the decision the human takes, theoretically, upon some analytics or insight that made them decide, “I need to push the button to deploy X servers in Y region,” or whatever it may be. Is that decision support part of that custom product as well? Or is it a balance between two or three different Tableau instances that sit behind different databases, and then you go over to the provisioning tool and just take the action—you make the decision in that tool, but the insight about when and how and where to make the decision is not part of the tool? Or is that actually in the tool as well, where it’s like, “Hey, we predict that you should do this,” or, “Here are the stats. You come and make the decision on provisioning based on what’s in this tool”?

How much is that wrapped together versus a series of different URLs you’re going to bounce through and piece together yourself with eyeball analysis?

Paul: There’s some separation of systems, and we’re actually moving in a more integrated direction. For example, a lot of this begins with customer demand. Either we determine, or the customer gives us information that helps us determine, that they need capacity in a certain area.

That drives the process, but it also factors into a lot of decision-making about exactly what gets deployed, where, and when. There are elements of this that are integrated, in the sense that the deployments we’re planning to make to expand the network or to change it in some way are inputs into this great big optimization model where you say, “Here’s what we think is coming, here’s what we think is going to happen, here’s the moves we’re planning to make, and here’s when and where we will run out of capacity.”
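As a rough illustration of that capacity-runway question—given forecast demand and planned deployments, when and where do we run out?—here is a minimal sketch. The regions, numbers, and data structures are all invented for illustration; Paul doesn’t describe the actual model, so this is only a toy stand-in for the idea.

```python
# Hypothetical sketch of the capacity-runway question behind the optimization
# model Paul describes: given forecast demand and planned deployments, when and
# where does a region run out of capacity? All names and numbers are invented.

forecast_demand_gbps = {            # per-region demand forecast by month
    "region-a": [900, 950, 1010, 1080],
    "region-b": [400, 520, 610, 650],
}
current_capacity_gbps = {"region-a": 1000, "region-b": 600}
planned_deployments = {("region-a", 2): 200}   # +200 Gbps landing in month index 2

for region, demand_by_month in forecast_demand_gbps.items():
    capacity = current_capacity_gbps[region]
    for month, demand in enumerate(demand_by_month):
        capacity += planned_deployments.get((region, month), 0)
        if demand > capacity:
            print(f"{region}: projected shortfall in month {month} "
                  f"(demand {demand} Gbps > capacity {capacity} Gbps)")
            break
    else:
        print(f"{region}: enough capacity across the forecast window")
```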

I think we’re moving towards a more integrated feedback model for that, where less of the work has to be connect-the-dots by a human being and it’s more about saying, “Okay, all the systems have this data, and if they can exchange it with each other, then we have all the data in the places we need it.”

Brian: You’re talking about this feedback cycle annually—then you might look back and say, “How well did we do on these optimizations? We planned for this predictive resource allocation,” or whatever it may be, and you look back and see how accurate that was by looking at the utilization rates or something?

Paul: Exactly. Is there a customer demand we failed to meet? On the flip side, were there servers sitting around underutilized?

Brian: Got it. When we talk about Akamai going out and deploying servers, are you talking about deploying physical hardware in a datacenter or are you just talking about provisioning up virtual servers on the cloud somewhere? I’m just curious because you guys are a network that sits on top of the internet. Does this involve lots of humans and you’re rolling out hardware and all that or are we really talking about virtual deployments?

Paul: Some of each, but one of Akamai’s hallmarks, actually, is the breadth of the network. We have some servers in pretty remote locations. These are physical servers. These are places, in some cases, where there aren’t a lot of good cloud providers or anything like that.

Brian: Johnny’s going to the Arctic to install some Dell servers.

Paul: That’s right. I’ll tell you there’s a datacenter in Antarctica and it’s possible we have a server there.

Brian: Someone’s got to go rewire it once in a while. Oh, we’re out of storage. There are still disk drives in that cloud up there. They might be flash, but they’re still a piece of hardware.

Paul: One of the things that really differentiates Akamai is that we have this extensive edge network, which really is pretty unparalleled in the industry.

Brian: When there’s a report back then, do they look at the travel cost for Johnny going to the Arctic on an ice clipper or whatever it’s called, and then was it worth going there to deploy these servers?

Paul: Sure. Increasingly, that is the kind of analysis that we’re doing, [and] we manage the network according to some of that. When there are servers that are sitting somewhere and just not getting used, or they’re there but they were extremely expensive to put there, then maybe that’s not a place to cover in the future. But in some cases it makes sense to keep our coverage really good, even if in one area we’re sacrificing a little bit of cost to keep the coverage up overall, and that might be worth it.

Brian: Right. I’m curious. Now that you’ve been here awhile and through all of this, do you have any stories or anecdotes about a particular user experience—a customer or internal user that found an approach that’s useful—or maybe you got some feedback that was negative but you learned something not to do again? Any anecdotes you can think of that were insightful for you?

Paul: Yes. We have a number of tools that we use for manipulating all the business data around what’s deployed in our network. I would say the best anecdote I have about them is that we’ve found there are tools that are very commonly used because of their flexibility. But if you actually look at the tool itself and you look at its complexity, it’s not that complex. It’s the default way of doing things, and people have used it continually because it has always been the way of doing it, when in fact there’s nothing particularly special about it.

We’ve seen certain circumstances where you give somebody a new tool that just works faster or provides a very similar interface, or you found some tweak to that workflow that really can save them tons and tons of time, and you just watch their eyes pop out. They realize that you probably just saved them two hours a day. It’s interesting that that can happen in pockets and corners. There are many tools that have been built already to help with that, but there is still plenty of opportunity for it.

Brian: That’s great. That’s one of the things I think I love about being a designer. A lot of times, the big-picture rewards—like, “Was this product valuable or profitable?”—are these lagging indicators which take a while, and they don’t have the same hit as those small wins, which are like, “I just saved this guy two hours a day doing a task that has nothing to do with his skill set. It’s just labor. He’s not using his brain. He just has to download these logs, put them in Excel, run a lookup, and then blah-blah-blah. And now it’s just bam.”

I love that, and that’s part of the joy of doing design work for me. I totally relate to what you’re saying about helping someone. It is so much about helping people, and you also feel like, “Man, I’m also helping the company, because I’m helping this person use his brain to do much more important things than what he was doing with tool time—downloading crap and uploading it into a tool, sorting it, changing this, and blah-blah-blah.” Most of that is tool time. That’s not the “Should we put more servers in Antarctica?” It’s not the thinking time and the valuable business time.

Paul: It’s one of the very fulfilling aspects of a job like this, where you’re building tools for internal stakeholders. In many software industries, you build a product but your users might not be accessible to you, or hardly at all. Here, they’re right down the hall. There’s great fulfillment, I think, in building something that meets a person’s need, having that feedback, knowing that it [did], and having the satisfaction of that.

Brian: Yeah, that’s awesome. This has been great. I’m curious, do you have any closing advice for other product owners or data product leaders, analytics practitioners in your space—maybe about changing domains, since you’re in a new domain? Any insights looking back on these six months, or however long it’s been that you’ve been there?

Paul: I would say, above all, my advice would be to take the time to plan. Nobody ever thinks they have the time to design or to plan. To some extent, you just have to say, “If we don’t do this, the thing we build is not going to be worth nearly as much as the thing we could build.” You’re much better off figuring out the right design for something before you build it. Even when you think you don’t have the time, ask your managers and your management chain for the space you need to get that pipeline started the right way, because once you actually design things, you’re going to find that the number of people you’re helping and the degree to which you’re helping them is much greater.

Brian: I can totally get behind that closing statement—I agree. First of all, you’re putting that anchor in place to do good things down the road. You’re probably reducing your technical debt and you’re maximizing your ability to change, especially when you’re doing small deployments. You’re probably going to need to change stuff, so a little bit of designing and planning upfront can do a lot for the engineering part of it, but also, most importantly, for getting the customer experience right. So, amen to that.

Paul: Maybe the last part of that, just to add, is that sometimes we take for granted the job we’ve been at for a long time. We take for granted that we think we already know what everybody needs. Sometimes it’s actually a blessing when you come to something brand new, because you’re not going to assume you already know what that person across the hall really needs. You say, “I’m going to ask that person, because I have no idea.”

I would say these problems are the same everywhere. Even if you’re in a place, in a domain you’ve been in for a while, there’s still going to be some aspect of the problem where you don’t understand what that person is living with. Pretend to be the new guy for as long as you can. Go ask again and get to really understand what that person is experiencing, because I know you’re going to be able to meet the need much better.

Brian: Yeah, I think that’s great advice. You don’t have that bias from your own knowledge about the domain or your assumptions there, and that’s just a good design technique in general—being able to compartmentalize. We all come to the table with bias, but you can try to put that aside. For me, a lot of times when I’m learning new stuff with clients, it’s “explain it to me like I’m a five year old.” I tell my clients sometimes, “What does it mean to deploy a server? What is he literally going to do, and how does he know when to push the button to go do that?” And sometimes they look at you like, “What do you mean? You don’t know what a server is?” It’s like, “Well, I know what a server is, but I literally want to see every step it takes to know to go put one there. Is the guy going to walk out there with a box and rack it up? Or is this a virtual thing? Literally tell me what that whole process is like.”

Even though I know something about how that works, you go in with that clean slate because you want to be open to the things you don’t know to ask about. The more you can come in having removed as much of that bias as possible, the more you might find those nuggets that just pop out at you—things the customer doesn’t know to tell you about because they’re just going through their process. You have these moments where you learn something you didn’t go in there to ask about, and sometimes it can be a really big thing, like, “Wow. That’s really what the gap is here. It’s not this. It’s this other thing.” Having that really childlike innocence about the way you inquire can help enable that.

Paul: Absolutely.

Brian: Where can people find out about you? LinkedIn? Twitter? Are you out there in the internet?

Paul: I’m on LinkedIn, for sure. I’m on Twitter. I’m on Facebook.

Brian: Where are you on LinkedIn? What’s your Twitter handle? I’ll put the links in the notes, too.

Paul: I think I’m @pjmattal everywhere.

Brian: @pjmattal on Twitter. Okay, great. I’ll put your information up there. Thanks for coming on the show. It’s been great to hear about what you’re doing at Akamai, and good luck as you guys charge forward.

Paul: All right. Thanks.

 
