My guest today is Gadi Oren, the VP of Product for LogicMonitor. Gadi is responsible for the company's strategic vision and product initiatives. Previously, Gadi was the CEO and Co-Founder of ITculate, where he was responsible for developing world-class technology and product that created contextual monitoring by discovering and leveraging application topology. Gadi previously served as the CTO and Co-Founder of Cloudscope, and he has a management degree from MIT Sloan.
Today we are going to talk with Gadi about analytics in the context of monitoring applications. This was a fun chat, as Gadi and I have both worked on several applications in this space, and it was great to hear how Gadi is habitually integrating customers into his product development process. You're also going to hear Gadi's way of framing declarative analytics as casting "opinions," which I thought was really interesting from a UX standpoint. We also discussed:
"The barrier to replacing software goes down. Bad software will go out and better software will come in. If it's easier to use, you will actually win in the marketplace because of that. It's not a secondary aspect." – Gadi Oren
“…ultimately, [not talking to customers] is going to take you away from understanding what’s going on and you’ll be operating on interpolating from information you know instead of listening to the customer.” – Gadi Oren
"Providing the data or the evidence for the conclusion is a way not to black-box everything. You're providing the human with the relevant analysis and evidence that went into the conclusion, and hopefully, if that was modeled on their behavior, then you're modeling the system around what they would have done. You're basically just replacing human work with computer work." — Brian O'Neill
“What I found in my career and experience with clients is that sometimes if they can’t get it perfect, they’re worried about doing anything at all. I like this idea of [software analytics] casting an opinion.” — Brian O’Neill
“LogicMonitor’s mission is to provide a monitoring solution that just works, that’s simple enough to just go in, install it quickly, and get coverage on everything you need so that you as a company can focus on what you really care about, which is your business.” — Gadi Oren
Brian: Alright, welcome back to Experiencing Data. I’m excited to have Gadi Oren on the line from LogicMonitor. How is it going Gadi?
Gadi: It’s going great. Thank you for having me.
Brian: Yeah. I'm happy to have you on the show to talk about not just monitoring; you've done a lot of work on SaaS analytics products in the monitoring space, software for IT departments in particular. Can you tell us a little bit about your background and what you're doing at LogicMonitor these days?
Gadi: Too many years in different industries. I've actually worked across multiple industries, starting with medical imaging. For the last 18 years or so, it's mostly been monitoring solutions of some sort. I also dabbled a little in marketing data analytics. That was not a successful company, but I might draw some examples from there. I recently joined LogicMonitor through an acquisition. I was the founder and CEO of a company called ITculate here in Boston. That company was acquired in April by LogicMonitor, and I'm now the VP of Product Management.
What LogicMonitor is doing is solving a fairly old problem that still remains: monitoring is really difficult. Many companies, as they grow, reach a point where they realize how important it is to monitor what's going on in order to be successful. Then they realize that it's such a complex domain that they need to develop expertise. It's just all-around difficult. LogicMonitor's mission is to provide a monitoring solution that just works, that's simple enough to just go in, install it quickly, and get coverage on everything you need, so that you as a company can focus on what you really care about, which is your business.
Brian: Obviously that's a hard problem to solve, and I'm curious, for people that are listening to the show: I imagine a lot of what this product is doing is looking for exceptions, looking for things that are out of bounds from some semblance of normal, and then providing that insight back to the customer. Is that a fair evaluation?
Gadi: It's a fair evaluation. There is obviously the question of what is normal, but in general, given that there are many ways to define what normal is, the answer is yes. It's the ability to give you visibility into what's going on: first of all, to just see that things work in general and work okay, and then, when something goes outside of what you define as normal, to notify you and help you with getting things better.
Brian: From your experience in this space: in a lot of companies that are doing analytics, it may be difficult to define the boundaries of what normal is, such that you could do something like, "Oh, we've detected an abnormal trend in sales." I can't think of a specific example off the top of my head, but I like the idea that the focus of the product is on declaring a conclusion or deriving an insight from analytics that are happening in the background. I would put that in the camp of declarative analytics, as opposed to exploratory, where it's like, "Here's all this data. Now you go and find some interesting signal in it." Most customers and users don't want the latter; they want the tool to do that job.
Do you have any suggestions for companies that maybe aren't quite in a domain where it's black and white, a binary thing, like a cord is either connected or not (if it's not connected, that's bad, and if it is, it's good)? Is there an approach to putting guard rails on things, on what normalcy is? Do you follow what I'm saying? How do you move into that declarative space?
Gadi: The answer is, obviously, it depends. The problem is so difficult that you even have a hard time defining the question. It is very, very difficult. By the way, you called it declarative; I like that. I actually call it something different that usually creates a lively discussion: I call it opinionated. An opinionated system. The reason is that there's been some evolution, especially with regards to monitoring, but I think it's the same for other types of systems that are analytics-based.
Ten, fifteen years ago, it was so difficult to just gather all the data that being non-opinionated, or non-declarative in your terms, was pretty good, because people just needed the data and they brought the context themselves. But there have been a lot of changes since that time. First is the availability of computing. But also, the need is much greater now for giving the opinion, giving the bottom line. The system needs to be opinionated, and then it can be a variety of things. There are really multiple types of algorithms that can be used here. A small subset of that is what people define today as machine learning and AI, but the domain is actually much larger than just AI. It's the ability to look at multiple signals and develop, with a certain degree of confidence, a conclusion that is derived from those multiple signals.
To put it in the most generic way that I can: some of those signals can be discrete or binary, and some of them can be continuous. How do you look at all that and say, "I think this is what's going on"? And even more than that: here is what we think is going on, and here's what you can do about it, or here are a few options for acting on it. That is the ideal solution we would like to have. Obviously, we have very, very little of that right now. I think it's a journey that will take us years.
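To make that concrete, here is a minimal sketch, in Python, of the kind of opinionated combination Gadi describes: several weighted signals, some binary and some continuous, reduced to a single conclusion with a confidence level. It is an illustration only, not LogicMonitor's method, and every name and number in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    weight: float  # how much this signal counts toward the conclusion
    fired: bool    # True if the signal supports the conclusion

def continuous_signal(name: str, value: float, threshold: float, weight: float) -> Signal:
    """Map a continuous measurement onto a binary signal via a threshold."""
    return Signal(name, weight, value > threshold)

def form_opinion(signals: list[Signal], min_confidence: float = 0.6) -> tuple[str, float]:
    """Simple weighted vote: confidence is the share of total weight that fired."""
    total = sum(s.weight for s in signals)
    support = sum(s.weight for s in signals if s.fired)
    confidence = support / total if total else 0.0
    verdict = "likely problem" if confidence >= min_confidence else "no clear conclusion"
    return verdict, confidence

signals = [
    Signal("disk_errors_logged", weight=2.0, fired=True),                # binary signal
    continuous_signal("db_latency_ms", 4.2, threshold=5.0, weight=3.0),  # continuous signal
    continuous_signal("queue_depth", 900, threshold=500, weight=1.0),
]
print(form_opinion(signals))  # -> ('no clear conclusion', 0.5)
```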
Brian: I love that idea of opinionated, because I think it softens the expectation around the technology, and it also reminds people that it's no different than when your plumber comes to your house and you're like, "Well, the shower's hot water isn't quite as hot as the sink's water." Maybe he tightens some things, he turns on some faucets, and he gives you an opinion about what might be wrong without actually tearing apart the whole system. We tend to trust that. He might say, "Well, what I really need to do is X." Then you make a decision about whether you want to pay for that or not. But it's not like, "If you can't give me a 100% decision, then I'm unsatisfied." We accept that opinion.
What I found in my career and experience with clients is that sometimes, if they can't get it perfect, they're worried about doing anything at all. I like this idea of casting an opinion. On that thought, then: say you're putting a data model in place, something that's going to learn from the information. There may be insights gleaned from some type of computer-based analysis which are unseen or unexpected by the business. That could be positive or negative. But there also might be some context of what normal or expected is that lives with the end-users.
For example: I expect the range to be between 32 and 41 most of the time. I know sometimes it goes up, and I have this feeling about X, Y, and Z. They have something in their head; you go and do all this technology and it says, "Well, the normal range should be 14, so we flagged the 16 here," and they're like, "I don't care about that. It's not high enough for me to care." How do you balance that sense that an end-user has, like, "I track sales," or, "I'm doing forecasting," when they have all this experience in their head?
Gadi: A couple of things. It depends a little bit on the domain. In some situations, where the end result is what's really important, you can use black-boxy types of things like neural networks. Not always, but in most cases, they tend to be more like black boxes. It's like, "Here's the result. I can't tell you why that happened; it's based on all this training I did before." In other situations, the result cannot be a black box. It needs to be explained. In those situations, you really need to give people an explanation of why things happen the way they do.
Monitoring, in many cases, tends to be the latter: I want to see the signal, what its shape was, and how it looked yesterday. If it looks the same, maybe it's okay. In certain situations, maybe you have a database with a latency of half a millisecond, which is very small, and then this morning it moved to one millisecond. Is that normal? It's not normal, but I don't care about it, because before it gets to five milliseconds, I don't need to know about it.
In those situations, and I don't know if that's what you were referring to with guard rails and things like that, while the system is learning and can automatically detect what's abnormal, there is a range of what I care about. I'm going to put a threshold and say, "Only if you cross five milliseconds and this is not normal behavior, then I want to see an alert." Normal could be defined as the signal staying within two sigmas of the same day last week. Something like that. There are different levels of approaches, both in terms of how consistent the data processing is and in what type of knobs you should provide to the user in order for the user to develop the right confidence level to use the solution.
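As a concrete sketch of that combination, the two conditions Gadi names, an absolute threshold ("only past five milliseconds") and a statistical one ("two sigmas away from the same day last week"), can be combined so an alert fires only when both hold. This is an illustration of the idea, not LogicMonitor's actual logic, and the sample values are invented:

```python
import statistics

def should_alert(latest_ms: float,
                 same_day_last_week: list[float],
                 floor_ms: float = 5.0,    # the user's knob: "I don't care below 5 ms"
                 n_sigmas: float = 2.0) -> bool:
    """Alert only if the value crosses the user's floor AND deviates more than
    n_sigmas from the same day last week."""
    mean = statistics.mean(same_day_last_week)
    sigma = statistics.stdev(same_day_last_week)
    abnormal = abs(latest_ms - mean) > n_sigmas * sigma
    return latest_ms >= floor_ms and abnormal

history = [0.5, 0.6, 0.5, 0.7, 0.5, 0.6]  # last week's latencies, in ms
print(should_alert(1.0, history))  # False: statistically abnormal, but below the 5 ms floor
print(should_alert(6.0, history))  # True: above the floor and far outside two sigmas
```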
Brian: I would agree with that. I think there's a balance there. Actually, when we talked on our screening call, you made a comment, and it was a good quote. It was something like, "The cutting-edge UI is English," if I recall.
Gadi: Or any other language, yes.
Brian: Exactly, whatever your interface is. But I would agree with that sentiment. From a user experience standpoint, deriving that conclusion, or opinion as you said, first, then backing out from there: providing the data or the evidence for the conclusion is a way not to black-box everything. You're providing the human with the relevant analysis and evidence that went into the conclusion, and hopefully, if that was modeled on their behavior, then you're modeling the system around what they would have done. You're basically just replacing human work with computer work.
What I found over time with some of these systems, just from watching customers, is that they're very curious at the beginning, and if you can build that trust, they start to understand how to trust your opinions. They tend not to hold you as responsible if an opinion is wrong, because they know what went into the math and the analytics, and also what didn't go into it, so they can fill in the holes themselves. Of course, this means you have to know your customer; you need to have some kind of interaction with them. Can you tell me about some of your customer interactions? You mentioned one of the KPIs for your team. Tell us about that.
Gadi: Over the years, obviously, you develop professionally and you change the way you approach what you do. I've done product management in different ways. I was a CTO at some point. But my basic attitude is doing product management and building the product from an understanding of what it is we want to build. What I've realized over the years is that there are a couple of really important points. One is that the more you talk to customers, the more you understand the problem you're trying to work on. A couple of years into working on a certain problem, you get to a point where you're so familiar with it that you can pretty much generate a lot of really good product for some time without talking to customers. The problem is that this might diverge at some point, or you're going to miss something important.
I think that talking to customers all the time is what grounds you in what's going on. I have a team of about 10 people, and we're monitoring how many times they have interactions with customers. I chart it monthly. I started it recently as one of the KPIs, and I'm going to check: if a certain part of the team is talking less to customers, why is that, and is it okay or not? And if there's a spike, if some of them stopped talking to customers, then we're going to have a discussion about why that happened, because I think ultimately, it's going to take you away from understanding what's going on, and you'll be operating on interpolating from information you know instead of listening to the customer.
Brian: Is it safe to say your team is comprised primarily of product managers covering certain portions of the product, and then some design and user experience people reporting to you?
Gadi: Yes. I have, let's say, about 10 people, and we're hiring all the time; the company's growing very quickly. Most of them are product managers and some are design people. We also have a variety of previous experience on the team, which is really something I like. Some of them are from the industry and have built-in knowledge. Some were in engineering before, which I think also brings an interesting perspective. One or two came from being sales engineers or from sales, which has a different kind of benefit to it. I don't think I have someone who was a customer before; if you can have that, it's really advantageous. I might be able to do that at some point.
Brian: I think that's great. Do you involve your engineers or your technical people, data scientists, whoever, in any of these interviews that you do and your customer outreach?
Gadi: As much as possible. We look at it from multiple sides, and a lot of the engineers I work with right now are located in China, so in terms of language, we might sometimes have barriers. But absolutely, when possible. I know that when I transitioned from engineering to product management, the exposure to customers was very educational for me. Whenever I'm able to expose people to customers, I take that opportunity.
Brian: You have an interesting position. Maybe this is super common, I don't know, but you started out in a technical capacity, you have an engineering background, you were a CTO, and now you're in product. I'm curious: as someone who's looking at the product holistically, as both a business and an experience you need to facilitate in order to have a relevant business, are there biases that you need to keep in check from your technical background, where the engineer in you says, "I want to do X," and you're like, "No, no, no"? What are some of those things to watch out for to make sure that you're focused on the customer experience and not on how it's implemented?
Gadi: The question of biases is a wide one. It's not just about engineering; it's bias in general. At some point, you obviously get excited about what you're building and you see all the possibilities. "We could do a little something here, we could solve that problem or this problem," and then you start developing a preference. It's very natural. When you realize that this is the case, sometimes you try to just not have a bias, but you can't. Everybody has one.
The problem is, how do you make sure that this bias doesn't have an impact when you talk to a customer? It's very easy to get a customer to tell you what you want to hear. Probably the easiest person to deceive is yourself, if you don't pay attention. This is one of those things.
With regards to engineering bias, it's not very different from any other type of bias. Engineers and makers just really care about working on interesting things and new technologies. Sometimes there's a problem, and I think the more advanced engineers start to think, "How would I generalize that problem?" It might become a runaway process where they want to build more than is required, and that "more" may or may not be pointing in the right direction. That's another type of bias. Again, definitely something to watch for.
Brian: I want to move on to some other topics, only because I could totally spend an hour talking about how important it is to do customer research. I love that you're doing that, and I think the theme here is that you've actually turned it into a KPI for your direct reports and the product management division at your company, which says that it's important to develop that habit. I would totally champion that.
Gadi: It's not all that I would like it to be. I can tell you what I would love it to be; since you're opening this up, I'll tell you what would be ideal. But it's a lot to ask for, so I'm not implementing that right now. I do check that people interact with customers and that they write down notes. Written notes should not be only two lines; they should tell you something.
Ideally, somebody could transcribe what the meeting was, but that's almost impossible. I've tried, but it's very difficult. What I would have loved to do, which we don't do right now because it's a lot to ask for, is have people replay the meeting in their heads and in their notes and try to extract problem statements.
In the past, I've implemented that in some situations and it was successful. But you have to do it in a continuous fashion over a long time, and then you see those problem statements: how many references do you have to each problem statement? It really gives you good visibility into what's going on. Now, asking for that is difficult, but clear notes are a good start.
Brian: Just to tack onto that: it can be very hard to listen attentively and draft notes at the same time. When I'm facilitating research sessions with a client, you'll have one person facilitating and one person taking notes, and then you debrief at the end. Sometimes that does mean it's a two-on-one instead of a one-on-one. It doesn't need to be perfect; you can get better at this over time. That's one way to get slightly higher-quality data.
You can also just use something like the audio recorder on your phone instead of handwriting the notes. When that meeting's over, you grab a phone booth-type room and just talk into your phone. Then you can have the audio converted to text very simply and quickly with a machine. That way, you've got a nice dump of what the conclusions were from the session.
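As one example of the machine-conversion step, the open-source Whisper library can turn a recorded debrief into text in a few lines; this is just one option among many, and the filename is hypothetical:

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")              # small, fast model; larger ones are more accurate
result = model.transcribe("debrief_notes.m4a")  # the audio you dictated after the session
print(result["text"])                           # rough transcript, ready to skim and tag
```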
Traditionally, in the usability field and the human factors field, people came out of a science background, so they would write these very long reports. Typically, what happens is, guess what, nobody reads the reports. So you have to watch out for that: we're doing all this stuff, but we're not taking any action on the information. I like the highlight-reel concept, however you go about doing that.
Can you talk to me a little bit about engagement with these data products? This is probably a bit truer in companies deploying internal analytics, like non-digital-native companies, non-product companies, but they're having trouble with engagement: customers aren't using the services. Do you have any broad ideas on how to increase engagement, from your perspective? How do you make the tools more useful, more usable? What do you have to say about that?
Gadi: How do you make the tools more useful? I think it's somewhat related to the question of how you make your product successful to begin with. I'm going to talk about new concepts, as opposed to incremental ones. If you're building something incremental, I'm assuming you have enough data to place your bets successfully. If you're building a fairly new concept, what I would usually recommend is: don't code it.
Try simulations, Excel, modeling, whatever it is you can use to build it without building it. Prototype it, and then get a few lead users. Those are users who are excited about this domain and who really care about solving that specific issue, enough to work with you effectively. You need two, three, or four of those, and you just start working with them.
As much as possible, use their data. In the data domain, when we're doing an analytics-related product, the user experience is part of the entire cycle. It's not just the user interface. No. It's how the data is getting into the system, how it's being acquired, how it's being processed, and how it's being used on the other side when it's producing meaningful insights.
You can test a lot of that cycle without a product, or with a very light version of the product, a prototype. I recommend that as much as possible. If you're going the right way, you will know very quickly, and if you're going the wrong way, you will also know quickly, and you can either course-correct or eliminate the project completely and save a lot of time and money. That's usually something that works really well for a new concept.
Now, for incremental work, it's slightly different. Usually, you can use a similar type of approach, but you can code something like a prototype into your product, show the capability, and then you'll usually have a lot more customers willing to work with you because it's a small increment. You can validate early. I guess that's the bottom line here: experiment, iterate, validate early.
Brian: Is it necessary to code, though? I don't mean to fixate on the word code; we're talking about Excel or whatever it may be. Is it necessary to get even to that level of technical implementation in order to do a prototype?
I love the idea of working with customer data, because that removes some of the classic problems I've experienced, like in financial products, where I was working on a trading system for portfolio management. You'd have a bunch of stock positions in a table and you're trying to test the design of the table, but you have funny prices: "Why is Apple stock trading at $12? Oh my God, what is going on?" That has nothing to do with the study, but you've now taken the user out of the-
Gadi: You will not get a meaningful […].
Brian: I love that, but do you necessarily need to get into modeling and all that kind of stuff if, for example, the goal is to see whether that downstream user would take action or not based on what they're seeing in the tool? If you're using a paper prototype or something like that: "What would you do if it says it's predicted to be between 41 and 44? What would you do next?" And you happen to know that that's a sensitive range. Do you even need to have actual Excel or math happening behind the scenes?
Gadi: I can see where this question is coming from. Nine times out of ten, just putting up a […] and mock-up might really give people a good feeling about where you're going with this. But I do think that in many cases, not working with real data (and customer data specifically is not a must) is not going to give you the right answer.
I'll explain when this can happen. There is the case that you mentioned, which I wasn't even going to mention, but it's too late: the data that you see doesn't make sense, so you're emotionally detached, and you're not getting good responses from that person. Now, assuming the data is good, and if it's yours, you'll connect even better to what you see. But for certain types of problems, you cannot understand, you cannot get a meaningful answer, if the data is not real.
I'll give an example. Right now, we're facing a very specific situation: LogicMonitor is in the process of redoing the UI and fixing usability. I'm told this is the fourth time we're doing it, and there is a very specific problem of how to do search. We've been going back and forth on how the search results should really show up, because the search results come back in multiple levels. There's data with dependencies; results come from multiple levels of dependency and need to show up on the same screen in a way that the user can use. We've come to the conclusion that the problem is hard enough that we need to prototype maybe one, or even two or three, types of result presentation and just show them to customers.
Obviously, we want to code as little as possible to do that, but this is something we're going to do. We usually do it with just wireframes, but in this specific situation, we not only need real data, we're actually coding something very minimal, three times, to get the right answer.
Brian: I think the theme here is, whether it's code or whatever material you're using, there's a theme of prototyping. I would add that, in the spirit of a minimum viable product, or what I would call a minimum valuable product (I like that better), it's figuring out: what is the minimum amount of design, which could include some technical implementation like a prototype, that you need to put in front of a customer to learn something? To figure out if it's on the right track? That's really what it's about.
In your case, maybe it does take actually building a light prototype. Maybe you don't actually query 30 data sources; you just have one database with a bunch of seed data in it. You control the test, but at least it simulates the experience of pulling data from many places or something like that, and then you can tweak the UI as you evaluate.
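A minimal sketch of that setup, with one in-memory SQLite table standing in for the many real sources; the schema and values are invented for illustration:

```python
import sqlite3

# One seeded, in-memory database stands in for the 30 real data sources.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (source TEXT, metric TEXT, value REAL)")
conn.executemany(
    "INSERT INTO metrics VALUES (?, ?, ?)",
    [
        ("db-1",    "latency_ms", 0.5),
        ("web-3",   "latency_ms", 12.0),
        ("cache-2", "hit_rate",   0.97),
    ],
)

# The prototype UI queries this exactly as it would query the real backends,
# so the flow under test feels real even though the plumbing is fake.
for row in conn.execute("SELECT source, value FROM metrics WHERE metric = 'latency_ms'"):
    print(row)
```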
Gadi: I think that's getting a bit specific, but again, in many cases you can do wireframes and you'll be fine. Remember, though, if you're trying to test a complete flow, a flow which may include 10-15 user steps through the interface, and you can do that with just wireframes, where every step they take produces a result that makes sense and you don't have to model it in the background, then that's fine. It's probably better.
But when we're talking about 10-15 steps, sometimes the amount of effort that goes into the wireframes is big enough to consider a very light implementation with Excel in the background. When it becomes comparable, if doing the wireframes is 80%, or even 70%, of the effort of a very light implementation where the background is Excel, then I'll say, "Hey, let's put in a little more effort," and then our ability to test opens up to a lot of other possibilities that aren't rigid within that wireframe. So, something to think about.
Brian: I would agree, if you can get a higher-fidelity prototype like that with the same amount of effort. Absolutely, you're probably going to uncover exceptions. You're going to uncover information in an evaluation with a customer that you probably didn't even ask about. There's so much going on, and there's so much more information to be gleaned from that.
The main thing is not falling in love with it too early and not overinvesting in it, such that you're not willing to really make any changes to it going forward. I find that's the challenge, especially with data products. I'm sure you've experienced it: there's a tremendous amount of investment, sometimes, just to get to the point where there's a search box and there's data coming back. At that point, you can fool yourself and say, "Oh, we're doing design iterations," but in reality, no one really wants to go back and change the plumbing, because it was so hard just to get to that first thing.
I think the goal is to not build too much, and to be aware of that bias against going back and reworking what may be difficult. "Oh, it's just a search box." It's like, "Yes." But if no one can get from A to B, then the entire value of the product is moot. It sounds like, "Oh, it's just a search box," but there's a lot of stuff going on in getting them from A to B in the right way in that particular case.
Gadi: I totally agree. I've seen a lot of managers make that mistake, and I bet I've made that mistake once or twice in my career. "This is awesome. It looks great. Package it and let's ship it." That is a big mistake, because then people are telling you, "No, no, no. This is just a proof of concept." Managers sometimes can't see the difference, so it's a communication problem, it's setting expectations, and you're right. Sometimes the way to avoid it is just not getting to the […] at all, and I agree, if you can.
Brian: Traditionally, from my work in the domain that you're in, the traditional enterprise tools, quite frankly, can suck. The tolerance for quality was quite low, and I think that's been changing. It's a slow shift, but the expectation that these tools will be hard to use, that they're supposed to be really complicated and meant for the very technical user, that's changing. Customers and end-users are more aware of design.
I'm curious: do you find that that expectation is going up? And do you find that new technology is making it easier to provide a better experience? Or is that negated by the fact that, in your particular domain, you've got cloud and on-premise? I can see the challenge going up: just as some of the tools get better, the challenge might get harder, too. Does it net out? No change? What are your thoughts?
Gadi: No, those aren't opposing vectors; they're actually pointing in the same direction, I think. What you're describing is, I believe, no longer the case. I don't know, maybe in some old banks somewhere in Europe where they're old-fashioned. I've heard that some banks in Germany still run on paper, no computers; that's why I made this comment. But in my mind, this is long gone. There are multiple trends really pointing in the same direction.
First of all, people have been educated by Apple that you can, in fact, have a product that's pleasant to use. For young people in their 20s and 30s, most of what they've seen is really a lot better than what you and I have seen, being slightly older than that. The expectation is to have good products. They've seen that hardware, software, and combinations of the two can be done well. That's one.
The second is that the technology is evolving. Especially in my space, there's been virtualization, then cloud, then containerization, so many big waves that change everything that you constantly have to refresh your software and its capabilities. The users inside the enterprises are now replacing stuff much faster. They're replacing the infrastructure much faster, and with that, they replace software and adopt it much faster.
The barrier to replacing software goes down. Bad software will go out and better software will come in. If it's easier to use, you will actually win in the marketplace because of that. It's not secondary; it's one of the things people care about. They don't care about the user experience specifically. They care about being able to complete their tasks. They don't care how that happens. If it was easier to achieve what they needed and it left them with a good feeling, it's a better tool. That drives better usability.
Everything is pointing in the same direction, because as a vendor you have to refresh. You have to create software faster to adjust to the new waves of technology coming in, because that's part of being competitive. And while you're at it, you have to take care to create really strong usability, because then you'll have another advantage in the marketplace. I think those trends are only reinforcing the same direction. You have to have great usability. By the way, usability is not limited to the user interface. It's everything; the user interface is just a part of it.
Brian: I didn't want to bias my question to you, but I would wholeheartedly agree that the tolerance for really difficult software, or software that doesn't provide its value clearly or quickly, has gone down quite a bit, and I think you're totally spot-on that consumer products have created an expectation that it doesn't need to be that complicated.
A lot of times, there's surface-level language, like, "Oh, it's ugly." Customers will sometimes comment on the paint, the surface of the interface, because they don't necessarily have the language to explain why; it may actually be a utility problem or just a value problem. I think the important point here is, as you said, usability is important, but it's not just about that. Ultimately, it's about whether value is created.
So, if you make a decision support tool, a declarative decision support tool in your case, it's probably often about minimal time spent in the tool, maximum signal when I do have to use the tool, and the best-case scenario is probably never needing to go into the tool at all. That's actually the highest business value. You can focus all day on UI, but maybe a one-sentence text message is really the only interface that's required, and you might deliver a ton of value with just that.
Gadi: To add to what you said, I totally agree. I actually see LogicMonitor winning deals in the marketplace based, ultimately, on better usability. I can give you an example. A lot of the customers we see are companies that are growing and started monitoring with a few open source tools (there are so many of them), and when you're small, it's like, "This is awesome. I'm going to use this open source tool and I have the problem solved." Then the company grows, and at some point you realize that you're spending so much time maintaining the open source tool, so much time making sure it keeps working when you add another resource to the network. I think the old expression was "Tool Time versus Value Time," or something like that.
Brian: Tool Time versus Goal Time.
Gadi: Goal Time, exactly. This is much bigger than the user interface. This is about the whole experience, which means that with those open source tools, you need a team of five people chasing all the changes that happen in the organization, and you're never there. You're never actually up to date with what's going on. From a very high-level perspective, this thing just doesn't work, and it doesn't work because of usability. So we come in and, as I said earlier about our mission, we try to offer something that just works, works well and quickly: you're able to deploy quickly, automatically discover changes, and follow what's going on.
People are amazed by that, and it's part of the rationale, depending on the problems they have. In many cases, it's part of what makes them buy our product. At the same time, parts of our product have an older user interface. Going back to your earlier comment, "Oh, it's an ugly UI," I think there are situations where products that are a bit older might have pieces of user interface that are not as great, but their overall experience is so good that it carries the product forward.
I think that organizations that have a choice might actually opt for a product that, at first glance, might not look as great from a UI perspective, but where the overall experience is good. I'm not saying that this is the situation with LogicMonitor, because we actually have a pretty good UI as well, but I think our UI has reached a point where it needs to be improved, and that's what we're doing now.
Brian: May I ask a question on that, just as we get towards the end here? If you're able to share, what are the outcomes that you want to get from the new UI? There's some business or customer impact you're probably looking for, right? A business justification.
Gadi: There is. It's fairly complicated, because it's also a very expensive process. There are very qualitative things you start hearing, like, "Oh, your user interface looks old," or people tell you things like, "With competitor X, it's much easier to do a certain task." I like that better, because it's a lot more specific and they can explain why and all that. But in many cases, you just get, "Oh, this other company has a new UI and it's so much prettier and cooler." That's very hard to measure and very hard to act on.
We have some of that, but more specifically, LogicMonitor is also moving from the mid-market to more and more enterprise. As that happens, certain things that used to be okay are no longer okay: the amount of data we're dealing with on the screen, how we process and present it. When you have a couple hundred items, you can think about a tree or a table. When you have hundreds of thousands, the entire thinking process is different and you need a completely different method.
So there are all these trends: we're moving upmarket in terms of size, we expect a lot more data in the user interface, people are telling us that the UI looks a little bit old, and we want to refresh the technology so we can do other things. If you're doing things that are mostly server-based, the UI tends to be more static. It doesn't have to be that way, but it's an engineering challenge.
But if you move to the newer frameworks like React and Redux, you can do a lot more dynamically. Every component can take care of itself, its data, its model, and update asynchronously. It opens up the product to things that are a lot more responsive, like a single-page application, for example. A large part of the light business logic is actually done on the client side rather than on the server side, which makes for a much better user experience. All those multiple causes, multiple trends, lead us to the conclusion that we need to refresh.
Brian: Was there a particular business outcome, though? For example, are you seeing some attrition that you're looking to stop? Or do you think this is a way to start facilitating sales, to close more easily with a better UI or anything like that? Or is it mostly qualitative?
Gadi: No, obviously we try to justify things quantitatively, so we look at all the requests from the last two years and how many of them are related to the UI, and to certain things in the UI that are very hard to do today. Yes, we do think that this will encourage sales, for certain reasons, because of things we will improve in the UI.
I think that over the years, because there have been so many people changing things in the product, some of the consistency has dissolved along the way. In most cases, you do things the same way, but in other cases, you do them a little bit differently, both in concept and in the UI. That's confusing for new users. Old users don't care; they've gotten used to it. But I think there are some issues of consistency.
Back to your question: we do expect this to increase sales, and we expect it to increase customer satisfaction. We're actually improving a lot of the flows. As we went through them, we realized that simple things are missing in the UI. Those are the gold nuggets you find along the way: really simple things you could add or modify in certain places that would make the flows a lot better. And I mean reducing 5-10 clicks in a certain flow.
My favorite one: I watch a user working in the product, and they have six or seven open tabs. I ask, "Why do you have so many tabs?" and they explain it, and it makes total sense, but it shows that you're missing something in the product. There are a couple of easy telltales: multiple tabs are open, there's a sticky note on the side with text, or there's Excel on the side where people copy-paste. All those things are signs of problems with the product. We have a few of those, and our product is going to come out the other side much more pleasant for users and help them achieve things faster.
Brian: Great. I wish you good fortune and good luck with that redesign you're going through at LogicMonitor. On that note, where can people find LogicMonitor, and where can they find you if they want to follow you?
Gadi: You can find me on Twitter; the handle is @gadioren. I have a LinkedIn page; you can look me up and find me there. You can get to our website at www.logicmonitor.com, and that will get you started if you're interested.
Brian: Awesome. Thanks, Gadi. This has been really fun, talking with you and hearing about your experience. Thanks for coming on Experiencing Data.
Gadi: Thank you very much for having me.