058 - IoT Spotlight: 8 UI / UX Strategies for Designing Indispensable Monitoring Applications
Experiencing Data with Brian T. O'Neill
February 09, 2021 | 00:35:04
Show Notes

On this solo episode of Experiencing Data, I discussed eight design strategies that will help your data product team create immensely valuable IoT monitoring applications.

Whether your team is creating a system for predictive maintenance, forecasting, or root-cause analysis, analytics are often a big part of helping users make sense of the huge volumes of telemetry and data an IoT system can generate. Oftentimes, product or technical teams see the game as, "How do we display all the telemetry from the system in a way the user can understand?" The problem with this approach is that it is completely decoupled from the business objectives the customers likely have, and it is a recipe for a very hard-to-use application.

The reality is that a successful application may require little to no human interaction at all. That may actually be the biggest value you can create for your customer: showing up only when necessary, with just the right insight.

So, let’s dive into some design considerations for these analytical monitoring applications, dashboards, and experiences.

In total, I covered:

1. Consider that your data product's user experience may span multiple applications, screens, departments, and people.
2. When building forecasting or predictive applications, let customers change parameters and explore what-if scenarios, and make the model's limits visible.
3. Well-designed applications understand the real world: seasonality, what "normal" means, and new information as it comes in.
4. The best service and maintenance experiences integrate the people, the processes, and the technology, including planned outages and ticketing.
5. The greatest IoT UX may be one where you rarely have to log in at all: well-designed alerts and notifications, without the noise.
6. Work backwards from the business value the devices provide, not the objects and telemetry themselves.
7. Don't underestimate the value of letting customers enter custom events or text descriptions, particularly on time-based charts.
8. Primary dashboards should focus on doing a handful of things very well: a snapshot of the landscape, ranked action items, business value, and a watch list.

Quotes from Today’s Episode

Consider your data product across multiple applications, screens, departments and people. Be aware that the experience may go beyond the walls of the application sitting in front of you. – Brian (5:58)

When it comes to building forecast or predictive applications, a model’s accuracy frequently comes second to the interpretability of the model. Because if you don’t have transparency in the UX, then you don’t have trust. And if you don’t have trust, then no one pays attention. If no one pays attention, then none of the data science work you did matters. – Brian (7:15)

Well-designed applications understand the real world. They know about things like seasonality and what normalcy means in the environment in which this application exists. These applications learn and take into consideration new information as it comes in. – Brian (11:03)

The greatest IoT UIs and UXs may be the ones where you rarely have to use the service to begin with. These services give you alerts and notifications at the right time with the right amount of information along with actionable next steps. – Brian (20:00)

With tons of IoT telemetry comes a lot of discussion of stats and metrics that are visualized on charts and tables. But at the end of the day, your customer probably doesn't really care about the objects themselves. Ultimately, the devices being monitored are there to provide business value to your customer. Working backwards from the business value perspective helps guide solid UX design choices. – Brian (23:18)

Links Referenced

Designing for Analytics Self-Assessment Guide: designingforanalytics.com/theguide
The CED UX Framework: designingforanalytics.com/CED
Designing Human-Centered Data Products seminar: designingforanalytics.com/theseminar
Design audit service: designingforanalytics.com/theaudit
Podcast homepage: designingforanalytics.com/podcast

Transcript

Hi, it’s Brian T. O’Neill here again, and I’m back with Experiencing Data and an episode featuring me again, [laugh]. This is going to be another solo episode. And I wanted to share my ideas around IoT. We haven’t talked about IoT too much on the show. But this is actually more about monitoring applications, and designing really useful tools for monitoring the health of systems, whether there are physical IoT objects or software objects, or whatever it may be. I thought I would give you some strategies that you could take back to your work, to the applications or products that you create in this space, to try to make them more valuable for customers.

So I’ve titled this episode, “IoT Spotlight: Eight UI and UX Strategies for Designing Indispensable Monitoring Applications.” So, what do I mean by that? I’m talking about, again, health monitoring, predictive maintenance, predictive utilization, you know, services that are intended to provide business continuity. So, any type of system where you’re monitoring activity and the goal is basically to keep a normal operating state and business state at all times. Also, tools that do things like root cause analysis, so this could be for cybersecurity or analyzing network traffic, this kind of thing.

So, I’ve done a fair amount of work in this space, both for data center products (software tools used in the data center to monitor traffic and analytics on your I/O and all that), as well as in the refrigeration space, looking at software and hardware systems used inside places like grocery stores and food transport, and at predictive maintenance in that world. So, I thought I would share some of the insights from my experience working with clients in this space that I think can be extrapolated to a lot of different industries. So, without further ado, I’ll jump into these eight items for you, and hopefully, you can take them back to your work.

The first one is to consider that your data product’s user experience may happen across multiple applications, screens, interfaces, departments, people, et cetera. Sometimes this is called CX, or sometimes it’s called service design; you might hear these words. I think part of this is really about understanding that the customer experience or the user experience may go beyond a single application, and this is especially true if, say, you have people whose job it is to monitor the health of your IoT devices, and so part of the time they’re online doing things such as looking at this application. Other times, they might be out servicing a physical piece of hardware or something like that. So, I want you to be thinking outside the scope of just the specific software application that you might be doing the data science part on, or the predictive modeling, or whatever it may be; the user experience may need to look beyond that.

So, if you work at a company that produces physical products that are data-enabled somehow (they have some type of telemetry connected to the web, or even a local network or something like that), and your goal is to have some type of predictive maintenance capability on the devices so that you can minimize business disruption to your customer, then when you think about the user experience here, and all the people that need to interact with the service, you may be designing something that involves: the end-users of the devices themselves; the people who manage the hardware, or the devices, or the objects, or whatever it is that the system is monitoring; the managers of the people who are the main users of the monitoring software, et cetera. And the managers may be the purchasers if you’re working with commercial software, so they have a business interest in this even if they’re not the ones who perhaps do the daily monitoring stuff. You may have employees at your company who service the devices (technicians or something like that); they may need telemetry as well, or a third-party service provider may be doing that type of work. You may have account managers and sales reps who want to understand what’s going on at the client site (again, if this is a commercial offering, or even if it’s not) and to be kept in the loop about what’s happening with the accounts that they manage.

And then you have CSRs and support reps: the people who might field inbound calls when a customer doesn’t know how to read a chart, doesn’t know what the telemetry is saying, or doesn’t think it’s collecting the data properly. So, the point here is that all of these different user types may have different interests, different needs, and different tasks and activities that they do. So again, we need to think beyond a singular user here and understand that there are groups of people, potentially, working together, or maybe pockets of groups, like your employees and your team, and then the customer has their employees and their team, et cetera.

So, be aware of that. Make sure you’re designing with all those people in mind, or I should say, the people who matter most. Not everyone necessarily needs a separate user experience, but you should be aware that there could be other people in the loop here with distinct needs. So that’s the first one: consider your data product across multiple applications, screens, departments, and people. Be aware the experience may go beyond the walls of the application sitting in front of you. Okay?

The next one is, when building forecasting or predictive applications, you want to allow customers to change the parameters, explore what-if scenarios, and really begin to understand the relationships that may be present, if possible. I know with some forms of machine learning, it can be very difficult if there are a lot of different variables contributing to the way forecasts are made, but this can be really helpful, too, if you’re showing something to senior management: rolling up these recommendations from your machine learning efforts into a small little prototype or application that can be played with can be really powerful in communicating the value of the work that you’ve done. It probably goes without saying that XAI, or explainable AI, or model interpretability, is going to be really important here, especially if the audience is perhaps not used to seeing that Feature A or Variable A actually has a lot to do with the predictive power of the model, because no one’s ever known that before.

So I’m hearing more and more that model accuracy is second priority to interpretability of the model. Because if you don’t have that, you don’t have the trust. And if you don’t have the trust, then no one pays attention. If no one pays attention, then none of the data science part matters. So, most of you probably know that, but just keep it in mind.

The second sub-bullet here, talking again about forecasting and letting people play with the parameters: if you need a visual example of this, think of something like a retirement planning calculator or tool. These run forecasts of your portfolio under certain conditions: certain stock market conditions, your rates of savings, and all this kind of stuff. And while they may not use machine learning or whatever, the customers don’t care about that. They’re trying to set themselves up for retirement.

So, the reason I use this is that it’s a good example that we can all probably relate to in terms of thinking about what variables we allow the user to set. Savings rates? Which accounts should be included? Which accounts are being drawn down on versus fully allocated for retirement purposes? What about a lump sum payment, if you’re expecting an inheritance or something like that? You want to be able to plan for some of these variables in the forecasts that are given. So, you need to do the same kind of work to sit down and really understand your users. Your CEO or your executive team probably isn’t planning for retirement, but they are doing some type of work, and if you’re going to build a tool like this, you need to be aware of what the variables are that they want to potentially lock in or control for if they’re going to be playing around with your forecasts and that kind of thing. So that’s that one.
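To make that idea a little more concrete, here is a minimal sketch in Python of exposing a forecast as a function of the parameters a user might want to play with, so a UI can offer what-if controls instead of a single fixed curve. The simple compound-growth model, parameter names, and numbers are hypothetical illustrations, not anything specific from the episode.

```python
# A toy, user-parameterized forecast. All figures are hypothetical.
def forecast_balance(years, starting_balance, annual_savings,
                     annual_return=0.05, lump_sums=None):
    """Project a balance year by year under user-adjustable assumptions."""
    lump_sums = lump_sums or {}  # e.g. {3: 50_000} for an inheritance in year 3
    balance = starting_balance
    trajectory = []
    for year in range(1, years + 1):
        balance = balance * (1 + annual_return) + annual_savings
        balance += lump_sums.get(year, 0)
        trajectory.append(round(balance, 2))
    return trajectory

# What-if: same horizon, different savings rate, plus a lump-sum inheritance.
print(forecast_balance(5, 100_000, 12_000)[-1])
print(forecast_balance(5, 100_000, 18_000, lump_sums={3: 50_000})[-1])
```

The point of the design choice is that every knob the user cares about (savings rate, lump sums, return assumptions) is an explicit input, so the what-if UI falls out naturally.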

And then the third kind of sub-bullet on this is to try to help the customer visualize how far apart the map is from the territory. The territory is reality; the map is just a model of the territory. The map is not reality. So, when they’re using this application or tool, how far apart is the map (your forecasting application) from the territory, the actual reality? One way this might manifest itself in the design: if there’s going to be a human decision made based on these tools, is there anything excluded in the modeling that may be worth explicitly telling the customer about?

So, this is not just about showing the features that were used to model the predictions; it’s also about what was not used, and being clear about that if there is an expectation that it might have been. A simple (and admittedly lame) example, with forecasting something to do with money over time, would be not factoring in inflation. If for some reason you were unable to factor in inflation (which probably would not be the case, but say it was technically very difficult) and it was excluded, you might know that, hey, this is actually a really important aspect if we’re going to be projecting something out over 20 years. We need to go beyond putting that in a tiny, gray footnote of legal text at the bottom of the application (which is more about risk and compliance and nothing to do with user experience); we might want to surface that and make sure it is very clear to the customer or user, when they’re making a decision, that this has not been factored in.

The map is always incomplete, but it may be really missing something that they might just assume is there. So that’s also part of the design choices. It might just amount to a blob of text or a little alert or something like that, but that is also part of the user experience.
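One way to make that surfacing hard to skip is to have the forecast result carry its own caveats. Here is a minimal sketch in Python; the field name excluded_factors and the example values are hypothetical, just illustrating the idea of moving exclusions out of footnote text and into the data the UI renders.

```python
from dataclasses import dataclass

@dataclass
class ForecastResult:
    projected_values: list
    excluded_factors: list  # e.g. ["inflation"]; hypothetical field

def render_caveats(result: ForecastResult) -> str:
    """Text the UI should display prominently alongside the forecast."""
    if not result.excluded_factors:
        return ""
    return "Not factored into this projection: " + ", ".join(result.excluded_factors)

result = ForecastResult(projected_values=[105, 112, 120],
                        excluded_factors=["inflation"])
print(render_caveats(result))  # -> "Not factored into this projection: inflation"
```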

So, number three, well-designed applications understand the real world. So, what do I mean by that? Well, I think they know about things like seasonality. They know what normalcy means in the world that this application is monitoring. They learn and take into consideration new information as it comes in.

So, the goal here is not to just display the statistics from the hardware and the telemetry, and expect users to come up with meaningful comparisons, and insights, and action items. Nobody wants to come and play with your metrics toilet. That’s not what they’re there for.

So, how can you model what normal operation means? What is the definition of unusual operation? What is the definition of unusual operation that is actually worth flagging the user’s attention in the actual interface, like a warning, an alert, a cause for concern, abnormality, stuff like this? Modeling these things out and understanding either the hardware itself or what the customer’s tolerance levels are for certain things, this can help us design a better experience because the system has knowledge about the human element of this data.

Knowing that 30 is really bad and 40 is terrible on a scale of 1 to 100: to the software, a 30 just looks like a 30. The software is dumb; it doesn’t know anything. We want to teach it that, no, 30 is a really significant number, and when this chart or metric goes beyond that, that’s a really important thing to know. And I sometimes see teams kind of run from this, because every customer is different and we don’t want to be telling them what it should be and all this kind of stuff, when in reality, the customers may already have an operating model in mind. They may have a sense of what normalcy is, and they’re already treating the world that way without your system.

They already monitor and manage the world in that way. That is their mental model of things. And so you may want to go with that, or at least begin with that. And if you need to adjust their expectations because they’re actually not looking at it the right way, you may need to go with it a little bit, and then adjust it over time.

And that gets to my other point, which is, again, ideally, these ranges need to adjust to seasonality. And that could be literal seasonality, like the weather or something outside, or business seasonality: there’s more use of the system during the summer, or during accounting season, or whatever. We expect to see a lot more activity in these systems, and we want the system to be smart enough to understand that so it doesn’t throw out false flags to us about what’s going on.
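As a rough illustration of qualitative, seasonally-aware ranges, here is a minimal sketch in Python. The metric scale, the monthly baselines, and the deviation thresholds are all hypothetical numbers made up for the example; the point is only that "is this abnormal?" is judged relative to what normal means right now.

```python
from datetime import date

# Hypothetical expected typical reading per month, on a 0-100 scale.
SEASONAL_BASELINE = {
    1: 45, 2: 45, 3: 50, 4: 55, 5: 60, 6: 70,
    7: 75, 8: 75, 9: 60, 10: 55, 11: 50, 12: 45,
}

def classify_reading(value: float, on: date) -> str:
    """Return a qualitative label for a reading, relative to the season."""
    baseline = SEASONAL_BASELINE[on.month]
    deviation = value - baseline
    if deviation <= -30:
        return "critical"        # far below what is normal for this time of year
    if deviation <= -15:
        return "warning"
    if deviation >= 25:
        return "unusually high"
    return "normal"

# A reading of 40 in July (baseline 75) is flagged,
# while the same 40 in January (baseline 45) is normal.
print(classify_reading(40, date(2021, 7, 15)))   # -> "critical"
print(classify_reading(40, date(2021, 1, 15)))   # -> "normal"
```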

And finally, why do some of these qualitative ranges for the numbers matter? Well, understanding the real world, the ranges, and the numbers that the human users of these tools are already keeping in their heads helps us draw better charts. And I’m talking particularly about things like time-series charts. So, what do I mean? Well, a simple example would be: how do you dynamically determine how to render the Y-axis minimum and maximum values when you’re dealing with dynamic data? Yes, you could just say, well, we’ll take the min and the max, and we’ll pad it by 20 percent, and those will become the values that we print on every single chart.

Well, the problem with that is, let’s say that normal use is between 30 and 100; on a scale of 0 to 100, the typical range would be 30 to 100. Then, for some reason, something’s not working, and you end up printing a range of 0 to 1. So literally, 1 percent of the total range is now what the plot covers. The chart itself is taking up the same amount of visual space, but the bottom of the Y-axis is 0 and the top is 1, and you’ve got this wild plot zigzagging all over the place.

And in reality, there’s no story. It’s like, go home, nothing to see here. But your chart visually suggests that there’s all kinds of activity going on here when in reality, there’s just nothing to see there because the chart was rendered dynamically and did not consider the real world. So, you might want to pin those charts to a 0 to 100 range, or last month’s average, or I don’t know what it is, but the point is, just simply taking the min-max values and using those may not be the best choice, and you may end up creating noise in the interface.

So that’s part of the reason why these what I call qualitative ranges really matter and actually help us render a better story. In that case, I might want to just see basically a flatline, a chart that’s empty. Why? The thing is offline. There’s nothing to see there. Maybe it’s got a pulse, but it’s basically dead. I don’t want to see a bunch of visual noise that suggests, hey, why is this thing going crazy, when it’s not going crazy. Right? Okay.
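Here is a minimal sketch in Python of that Y-axis idea, assuming we know a qualitative expected operating range for the metric (the 0-100 defaults are hypothetical). Instead of padding the raw min and max, which turns a dead, flat signal into a dramatic-looking zigzag, the axis is pinned to the expected range and only stretched when the data genuinely exceeds it.

```python
def y_axis_bounds(values, expected_low=0.0, expected_high=100.0):
    """Pick Y-axis min/max that respect the real-world operating range."""
    if not values:
        return expected_low, expected_high
    lo, hi = min(values), max(values)
    # Only stretch the axis when the data actually leaves the expected range.
    return min(lo, expected_low), max(hi, expected_high)

# A near-dead signal stays visually flat against the full 0-100 range,
# rather than being blown up to fill the chart.
print(y_axis_bounds([0.2, 0.4, 0.9, 0.1]))   # -> (0.0, 100.0)
print(y_axis_bounds([20, 140, 95]))          # -> (0.0, 140)
```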

Number four, when dealing with things like service and maintenance, the best user experiences around these devices integrate the people, the processes, and the technology. So, a simple example here might be a system that monitors the health of a bunch of similar objects, all of which are actually interconnected, and they all actually have some kind of impact on each other. So, what happens if one of these objects is taken out of service? This may have implications for your customer’s business, as well as for the actual technical system and the way the monitoring application reacts.

So, if one device is taken out of service, what happens to the other devices, and how does the software react to a situation like that? Does it understand the difference between being out of service and being broken or disconnected? Can a customer go in and say, “I’m actually removing this from service,” instead of the telemetry just suddenly disappearing and the system starting to generate alerts and notifications (“Oh, my God, something’s wrong”) when in reality, it’s a planned outage? So, this means we have to factor in how alerting and notification will work when the environment changes.
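A minimal sketch in Python of that distinction, with hypothetical state names and rules: the alerting logic treats a deliberately set out-of-service state differently from an unexpected loss of telemetry, so a planned outage doesn't trigger a flood of alarms.

```python
from enum import Enum

class DeviceState(Enum):
    IN_SERVICE = "in_service"
    OUT_OF_SERVICE = "out_of_service"   # set deliberately by a user
    UNKNOWN = "unknown"

def should_alert(state: DeviceState, telemetry_fresh: bool) -> bool:
    """Alert only when a device that should be reporting has gone silent."""
    if state == DeviceState.OUT_OF_SERVICE:
        return False   # planned outage: no alert, perhaps just a log entry
    return not telemetry_fresh

print(should_alert(DeviceState.OUT_OF_SERVICE, telemetry_fresh=False))  # False
print(should_alert(DeviceState.IN_SERVICE, telemetry_fresh=False))      # True
```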

And the last kind of sub-bullet on this is: how do multiple users on the customer side coordinate a change to the system? A good example here would be something like integration with a ticketing system. So, widget number 1A out of, you know, the 100 widgets that are out there is being taken out of service. Well, since there are actually five technicians that manage these 100 widgets, it might be good to integrate with that ticketing system or something, so that when User B logs in, they can see that User A, already aware of the situation with this object, has taken it out of service; there’s some kind of log or record there.

And again, this gets into some product scope questions, like, what’s the real value we provide? Is this becoming a ticketing system and all that? It’s so much easier now to integrate software and tools and share data with APIs and such that I think it’s really worth exploring that, or at least creating bridges. I’ve talked on this show about avoiding creating these islands with no bridges and no transportation. You don’t want to create an island where you have this kick-ass experience or vacation hub, but no one ever thought to build a port, or a bridge, or an airport. There’s no way to get to it.

So, if you can figure out a way to, say, link off to a ticketing system that logically links to this particular item that’s being taken out of service, you’re really helping out with the overall user experience. And you’re not letting the walls of your software application necessarily define the real-world user experience since, again, the real world is different. And the real world spans people, technology, application software, all those kinds of things.

All right, number five: the greatest IoT UI and UX may be one where you never have to use the service to begin with. So, what do I mean here? I mean some services may deliver their best value when you just don’t have to log in almost ever, unless you’re curious, and you just get really helpful, powerful alerts and notifications at the right time, with the right amount of information, with some kind of actionable next steps in them. This really gets to the point that you shouldn’t underestimate old technologies like email and messaging; they can be critical, not just a kind of feature add-on.

And sometimes I see this as, “Well, that’s a feature we’re going to add to the product,” instead of, “Actually, no, we can’t treat email as separate just because it’s technically a different feature and not part of the application user interface. It’s inherently part of the product and the product strategy and the user experience. We should not be looking at it as a bolt-on feature; it could be integral.” So, notification strategy: really, really important.

You may end up building something where like the dashboards rarely even get looked at because a customer gets an event notification and that links to like an event detail screen or something like that, and they don’t ever really look at the dashboard because they manage the whole thing through their inbox, or Slack or whatever the heck it may be. If there’s not many of these, that might be a totally fine thing. Maybe they pop by the dashboard on their way out to make sure there’s nothing else that they missed, but that could be a great experience right there. So don’t assume that everyone’s going to come through the front door: they may be knocking on your back door, they may come in from the pool house. You got to think about these different entry points when you’re designing this.

So, make notifications really clear; I usually recommend that they have some type of supporting evidence in them. You don’t want to send out a little scare bomb like, “System offline!” or, “Object whatever offline!” It’s like, “Why is it offline and what do I need to do about it?”

Well, maybe you could pack a little bit more information density into that email, like, “Taken offline by User X,” or, “It’s normally offline at this time,” or some kind of supporting data evidence to provide context, so that the user knows whether they need to get off the couch and deal with this right now or whether it’s something they could leave for tomorrow. The final thing I’m going to close out on with notifications is that you’ve got to watch out for alert noise. Consider batching alerts and notifications so that you’re not piling them on. This is especially true when the objects being monitored are interconnected; you can end up creating a lot of noise for the customer, and now they just have a different problem, which is: which alerts should I pay attention to?

And at some point, they just start to ignore it. And they’re like, I mean, I’ve literally heard users tell me this, “I just wait for the phone to ring because there’s so many alerts and devices sending me telemetry and crap all the time. I wait until someone calls me and they’re mad. Because I can’t possibly keep up with all this.” Do not contribute to that noise.

I would say err on the side of fewer notifications. Don’t go crazy with too many preferences and parameters, but if you only send out stuff when it’s really important, people will pay attention to it. But if your system is designed by default to generate a ton of noise out of the box, they’re just going to tune you out, and you’ve kind of lost the battle there.
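Here is a minimal sketch in Python of the batching idea mentioned above, assuming each alert carries a timestamp and the id of the device or parent system it relates to (both hypothetical fields). Alerts raised within a short, fixed window for the same parent are folded into one digest instead of being sent individually.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def batch_alerts(alerts, window=timedelta(minutes=15)):
    """Group alerts by parent system, then by a simple fixed time window."""
    by_parent = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        by_parent[alert["parent_id"]].append(alert)

    digests = []
    for parent_id, items in by_parent.items():
        batch = [items[0]]
        for alert in items[1:]:
            # Window measured from the first alert in the current batch.
            if alert["time"] - batch[0]["time"] <= window:
                batch.append(alert)
            else:
                digests.append({"parent_id": parent_id, "alerts": batch})
                batch = [alert]
        digests.append({"parent_id": parent_id, "alerts": batch})
    return digests

alerts = [
    {"parent_id": "rack-7", "time": datetime(2021, 2, 9, 9, 0), "msg": "sensor A offline"},
    {"parent_id": "rack-7", "time": datetime(2021, 2, 9, 9, 4), "msg": "sensor B offline"},
    {"parent_id": "rack-7", "time": datetime(2021, 2, 9, 9, 6), "msg": "temp out of range"},
]
print(len(batch_alerts(alerts)))  # -> 1 digest instead of 3 separate emails
```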

Okay, the next one here: with tons of IoT telemetry comes a lot of discussion of stats and metrics and letting people look at everything on charts and tables. But at the end of the day, your customer probably doesn’t really care about the objects or the hardware or the things that are being maintained. The direct end-user might, because their job might simply be to maintain all this equipment and make sure that it’s running correctly and all that. But ultimately, the devices and the things being monitored are probably there to provide some type of business outcome or value to your customer, to your user: business continuity and avoiding disruption, or whatever it is that they’re really interested in, the downstream value provided by the system.

Let’s say it’s thermostats controlling buildings. They really care about their cost spend, energy use, and all of that. They don’t really care about the thermostats themselves and the fans and the AC equipment. They care about the spend, their green footprint, and are we wasting money, and all this kind of stuff. That’s really what it’s about.

And that should be part of the user experience; it should be part of the strategy and the way we think about it. So, focusing on this business value is also not just good for your user, it can be good for you. And this is really true for commercial software people. If you’re running a commercial software product in this space, a simple example: can you quantify the business impact that the system is having? How many dollars and cents are you saving every day, or how many incidents did you prevent from happening? And is there a way to quantify what the business impact may have been from those?

And maybe there’s a little part of the dashboard that kind of just sits quietly in the corner, but it’s keeping track of this stuff, and it’s kind of gently reminding the customer, what is the value I get for this? Oh, yeah, like this thing is literally helping save me money. It’s saving on labor, it’s keeping business continuity going, and all of this. So, make sure you don’t get too lost in just the objects and the data, but also be thinking about the business impact.
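As a small illustration of that quiet corner of the dashboard, here is a minimal sketch in Python. The event types, field names, and dollar figures are hypothetical; the idea is just a running tally of prevented incidents and estimated savings that the UI can surface.

```python
def business_impact_summary(events):
    """Roll up a simple tally of prevented incidents and estimated savings."""
    prevented = [e for e in events if e["type"] == "incident_prevented"]
    return {
        "incidents_prevented": len(prevented),
        "estimated_savings_usd": sum(e["est_savings_usd"] for e in prevented),
    }

events = [
    {"type": "incident_prevented", "est_savings_usd": 4200},
    {"type": "incident_prevented", "est_savings_usd": 1800},
    {"type": "routine_check", "est_savings_usd": 0},
]
print(business_impact_summary(events))
# -> {'incidents_prevented': 2, 'estimated_savings_usd': 6000}
```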

Number seven; here we go, two more to go. If your system enables troubleshooting, do not underestimate the value of letting customers enter custom events or text descriptions, particularly on time-based charts. So, what do I mean by that? Again, map and territory aren’t the same thing. The territory is reality; the map is your application, and it’s not going to have the entire real world in it.

However, a simple example of this might be: we relocated equipment, so we had to disconnect it, reattach it to the network somehow, and now it’s back. Well, it might be good to let the user actually type in an event and say, “Equipment moved on January 7, 2021. We moved this equipment.” Then, seeing this on the time-series charts, when we see an abnormal drop in the telemetry that would normally be coming in, we have some context for what happened in the real world. It’s a super basic feature from an engineering standpoint. It doesn’t really have anything to do with the data science or the predictive capability or any of that, but it could be very powerful in the user experience, because we now have some additional real-world context built into the software experience.

I imagine this could be taken further, where maybe you provide a dropdown for what kind of event it is (an out-of-service event, a fault event, or whatever), you let the user manually include these, and then maybe the models actually, logically include something about that data in the way they react. That’s possible, too, but I would say not to get too carried away initially; just consider the value of letting people add notes of their own, and especially dates, when we’re talking about time-series charts and things like that, since a lot of times with IoT stuff, we’re looking at time-series data.
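Here is a minimal sketch in Python of what storing those user-entered events alongside the telemetry might look like, so they can be rendered as markers on a time-series chart. The event kinds, field names, and device ids are hypothetical, not a real schema from the episode.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Annotation:
    when: date
    kind: str       # e.g. "moved", "out_of_service", "fault"; hypothetical kinds
    note: str
    author: str

@dataclass
class DeviceHistory:
    device_id: str
    annotations: list = field(default_factory=list)

    def annotate(self, when, kind, note, author):
        self.annotations.append(Annotation(when, kind, note, author))

    def annotations_between(self, start, end):
        """Annotations to render as markers on a chart covering [start, end]."""
        return [a for a in self.annotations if start <= a.when <= end]

history = DeviceHistory("compressor-12")
history.annotate(date(2021, 1, 7), "moved",
                 "Equipment moved; offline while being re-networked", "User A")
print(history.annotations_between(date(2021, 1, 1), date(2021, 1, 31)))
```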

And then the last one here: primary dashboards and these types of tools probably need to focus on doing a handful of things very well. So, what are those things? First, you probably need a small snapshot of the landscape, the overall health of the entire system, potentially with some meaningful comparisons. This gets back to, “As compared to what?” Whenever we show data, or KPIs, or things like this, typically speaking, we want to have some type of comparison, whether that’s comparing it over time, comparing it to siblings, or comparing it to a target. You know, maybe there’s a service level agreement and there’s a way to quantify that. Whatever it is, the point is, usually you’re going to need some comparison there. So, the first one, again: a small snapshot of the landscape.

This is probably not the major thing, though. The major thing, I would say, on most of these dashboards is going to be action items. What needs my attention right now? Ideally ranked by business importance, but this could also be ranked by things like ownership: maybe I only manage odd-numbered devices or things that are in California, and I don’t touch the ones in Massachusetts. Whatever it may be, there are different ways to think about that.

This is especially true if there’s a large environment being monitored: be really clear about how you rank these things in a smart way, so that users really know what they should pay attention to right now. Action items, again, could be addressing broken or suspicious activity, or accepting a recommendation which then actually changes the environment. If you run with that model further, this could be something like, “We recommend that you auto-power down these devices between this time period; would you like us to do that?” And you click the button, “Yes.” Well, you would want your models to also be smart enough to adjust to that parameter and say, okay, these devices will now be off at this time; we need to make sure that the models don’t flag that as some kind of unusual event. So, constant learning here, dynamic environments; be thinking about those things. Again, why does that matter to the user? To avoid noise. That’s what a lot of it comes back to. Okay.
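A minimal sketch in Python of that ranking idea: open action items filtered by ownership and sorted by an estimated business impact. The ownership labels, the est_cost_per_hour scoring field, and the example items are all hypothetical illustrations.

```python
def ranked_action_items(items, owner):
    """Return this owner's open items, most business-critical first."""
    mine = [i for i in items if i["owner"] == owner and not i["resolved"]]
    return sorted(mine, key=lambda i: i["est_cost_per_hour"], reverse=True)

items = [
    {"id": 1, "owner": "us-west", "resolved": False, "est_cost_per_hour": 1200,
     "summary": "Freezer case 4 failing"},
    {"id": 2, "owner": "us-east", "resolved": False, "est_cost_per_hour": 300,
     "summary": "Sensor drift on unit 9"},
    {"id": 3, "owner": "us-west", "resolved": False, "est_cost_per_hour": 80,
     "summary": "Firmware update pending"},
]
for item in ranked_action_items(items, owner="us-west"):
    print(item["summary"])   # highest estimated business impact first
```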

And then another thing here would be, again, we talked about this business value, but see if there’s a way to roll up the business value that the customer has been getting from the system. I wouldn’t go overboard with this, but salespeople will love it. [laugh]. The managers may also love it. And I have literally seen it where an end-user who’s not the buyer of a piece of software, but is the person that lives with it every day, wants to be able to prove to their boss, who may be the fiscal sponsor, the one that makes the purchasing decisions; they may want to be able to advocate for it and say, “This is how much this is really helping me do my job. It’s saving us a ton of money. We should definitely renew our contract,” or whatever.

There can be a lot of value if you can actually show that in there. And they may want to actually go in and look at a detailed report about the business value, so there could be an opportunity there as well. And then the final thing on these dashboards that I would be thinking about is not to underestimate the importance of letting users have something akin to a watch list. This is something like, “I just did maintenance on these objects; they should be working fine, but I still just want to keep an eye on them. For the next week or so, I want to make sure that they don’t go offline; I just want to feel good about it.”

This is one of those great squishy design things. It’s not a complicated engineering feature. It doesn’t require any special modeling. It’s a rather simple thing to build in, but if we really get to know our customers, and of course, all these decisions should be informed by research that you’re doing with your users because they may not care about some of this stuff. I am giving you generalizations.

But you might find out that it’s just simple; I just want to be able to pin these items here, and I want to make sure that they’re always right on the dashboard when I log in, at least for a while. So don’t make them dig for that kind of stuff if it’s not necessary. Okay?

And that’s it. Those are the eight strategies for IoT monitoring applications. I hope they were useful. If you want some further reading on this topic, I’ve got a couple of links to share with you. I do have a free Designing for Analytics Self-Assessment Guide. It’s not just for IoT applications and cloud data products in that kind of health monitoring space, but there are some things in there that might be helpful, so you can get that at designingforanalytics.com/theguide (one word).

I also have what I call my CED UX Framework for designing advanced analytics applications. This really gets into how I like to think about presenting insights from predictions or even traditional analytics: how do we figure out how much data and evidence to show? When do we show it? How do we enable drill-downs? And all these kinds of design decisions. If that sounds like it would be helpful to you, you can go read about this framework. The CED stands for conclusions, evidence, and data, and the page goes more into what that means. That is available at designingforanalytics.com/CED (just the letters CED).

And then finally, if you need personalized help, two quick things: I do run my public seminar twice a year. It’s called “Designing Human-Centered Data Products.” So, at any time, you can just head over to designingforanalytics.com/theseminar and get on the early access list there and I’ll shoot you an email when registration opens.

And then the final thing here is, if you already have an IoT product or analytics application and you’re finding that it’s really complicated for users, they’re not seeing the value in it that you think is there, maybe it’s hard to sell, or it’s just not getting the adoption you want, you can hire me to come in and do an audit and actually assess what’s going on with the design and come up with a remediation plan. This would be a design remediation plan: what features need to change? How would the user experience change? Dashboards, visualizations, all that kind of stuff. It only takes about a week or two to get done, and oftentimes my clients can then run with their own team to implement the recommendations themselves, or they can hire a contractor, or we can keep working together, or whatever.

But my goal there is to really rapidly turn around some changes for you, so that you know how to make the design better beyond just lightweight aesthetic changes that really don’t move the needle in terms of having the business impact that we want to promise with these types of IoT cloud applications and things like that. So that’s guaranteed and, again, only takes a couple of weeks. If you’re interested in that, the link for that service is designingforanalytics.com/theaudit (just T-H-E-A-U-D-I-T).

Okay, so again, this is Brian O’Neill, Experiencing Data. Thanks for hanging out with me. Stay safe and healthy, and if you have a question for me, suggestion for the podcast, or comment, you can always leave an audio question right through your browser. Just head over to the podcast homepage designingforanalytics.com/podcast. I’m always interested in hearing from you and trying to make these episodes as useful as possible. So, until next time, see you soon.
