042 - Why Machine Learning and Analytics Alone Can’t Drive Behavioral Change inside Police Departments with Allison Weil

Experiencing Data with Brian T. O'Neill

June 30, 2020 | 00:39:12

Show Notes

“What happened in Minneapolis and Louisville and Chicago and countless other cities across the United States is unconscionable (and to be clear, racist). But what makes me the maddest is how easy this problem is to solve, just by the police deciding it’s a thing they want to solve.” – Allison Weil on Medium

Before Allison Weil became an investor and Senior Associate at Hyde Park Venture Partners, she was a co-founder at Flag Analytics, an early intervention system for police departments designed to help identify officers at risk of committing harm.

Unfortunately, Flag Analytics—as a business—was set up for failure from the start, regardless of its predictive capability. As Allison explains so candidly and openly in her recent Medium article (thanks Allison!), the company had “poor product-market fit, a poor problem-market fit, and a poor founder-market fit.” The technology was not the problem, but it also could not save the business or produce the desired behavior change, because the customers were not ready to act on the insights. Yet the key takeaways from her team’s research during the design and validation of their product — and the uncomfortable truths they uncovered — are extremely valuable, especially now as we attempt to understand why racial injustice and police brutality continue to persist in law enforcement agencies.

As it turns out, simply having the data to support a decision doesn’t mean the decision will be made using the data. This is what Allison found out in her interactions with several police chiefs and departments, and it’s also what we discussed in this episode. I asked Allison to go deeper into her Medium article, and she agreed. Together, we covered:

Resources and Links:

Quotes from Today’s Episode

“The folks at the police departments that we were working with said they were well-intentioned, and said that they wanted to talk through, and fix the problem, but when it came to their actions, it didn’t seem like [they were] really willing to make the choices that they needed to make based off of what the data said, and based off of what they knew already.” – Allison

“I don’t come from a policing background, and neither did any of my co-founders. And that made it really difficult to relate to different officers, and relate to departments. And so the combination of all of those things really didn’t set me up for a whole lot of business success in that way.”- Allison

“You can take a whole lot of data and do a bunch of analysis, but what I saw was the data didn’t show anything that the police department didn’t know already. It amplified some of what they knew, but [the problem here]  wasn’t about the data.” – Allison

“It was really frustrating for me, as a founder, sure, because I was putting all this energy into trying to build software and trying to build a company, but also just frustrating for me as a person and a citizen… you fundamentally want to solve a problem, or help a community solve a problem, and realize that the people at the center of it just aren’t ready for it to be solved.” – Allison

“…We did have race data, but race was not the primary predictor or reason for [brutality]. It may have been a factor, but it was not that there were racist cops wandering around, using force only against people of particular races. What we found was….” – Allison

“The way complaints are filed department to department is really, really different. And so that results in complaints looking really, really different from department to department and counts looking different. But how many are actually reviewed and sustained? And that looks really, really different department to department.” – Allison

“…Part of [diversity] is asking the questions you don’t know to ask. And that’s part of what you get out of having a diverse team— they’re going to surface questions that no one else is asking about. And then you can have the discussion about what to do about them.” – Brian

Transcript

Brian: Welcome back, everybody to Experiencing Data. This is Brian O’Neill. I’m very happy to have Allison Weil on the call today. Allison and I met pretty recently, somehow—I forget how, but her Medium post came across my radar, and I was like, “Whoa, this is so on the money for so many of the things we talk about on Experiencing Data about creating human-centered data products.”

This is also very timely, because we’re going to be talking about law enforcement, and specifically policing, and how we use data to help with police officers, and training, and interventions. And she’s going to be telling her founder’s story about how she created a product in the space—or was working on a product in this space, and why she had what she called a failure. I think it was a real—there’s a lot of takeaways and learning here. And so, Allison, welcome to the show. You’re a senior associate at Hyde Park Venture Partners, but previously, you had a stint as a founder. So, tell my audience about this story that you wrote on Medium. Give them an overview of what we’re going to talk about.

Allison: Sure thing. And Brian, thanks for having me on. I’m really excited to have this conversation. Yes, so before I was an investor, I worked as a founder while I was in business school, and I was working with some researchers at the University of Chicago to commercialize research they were doing to identify police officers at risk of misconduct or just any other issue, whether it was alcoholism, or risk of suicide, or complaints of any sort. And the summary of it is that I was not able to commercialize it for any number of reasons. And as I go over in the blog post, the first is just product-market fit. There was a lot of data, but you didn’t really need machine learning or any sort of advanced statistics to solve the problem.

The second is really problem-market fit. The folks at the police departments that we were working with said they were well-intentioned, and said that they wanted to talk through and fix the problem, but when it came to their actual actions, it didn’t seem like that, in that they weren’t really willing to make the choices that they needed to make based off of what the data said, and based off of what they knew already. And then, the third piece was, frankly, founder-market fit. I don’t come from a policing background, and neither did any of my co-founders. And that made it really difficult to relate to different officers, and relate to departments. And so the combination of all of those things really didn’t set me up for a whole lot of business success in that way.

Brian: Yeah. The other thing I wanted to say about this, just for folks: I really think people should go out and read this article, and I’ll definitely have it in the show notes. I really applaud your willingness to go write this article. You use the word failure in it, and you’re willing to talk about what didn’t work. And what I really took away from this was that the data and the quote, “correct analytics,” were not the problem here.

This is about behavior change. We’re talking about policing today. This is a sensitive subject, but it goes to show that we can’t just focus on the technology piece and, as you talked about, not every problem needs machine learning necessarily. That had nothing to do with it, whether it was Tableau, or whatever it was going to be. It sounded like the data was more than accurate here; the behavior change piece is a very different thing. So, what was the big learning you had—tell me about this behavior change light that maybe came on, the realization of what you were facing, like, “Oh my gosh, we have the product nailed, but it’s never going to stick.” That must have been an experience—I don’t know—to feel that.

Allison: Yeah, I think you really nailed it. I think, at the end of the day, if you take a step back—we’re talking about it in terms of policing, but you see it in healthcare; you see it really in any sort of area where there are experts, and there are people who are working in it every day. It’s unlikely, if you just asked them what their gut reaction is, that their gut reaction is going to be wrong or different than what the data says. And so, you can take a whole lot of data and do a bunch of analysis, but what I saw was the data didn’t show anything that the police department didn’t know already. It amplified some of what they knew, but it wasn’t about the data.

It wasn’t like, “Oh, there’s this brand new insight that your advanced statistics showed us that we didn’t know already.” So, that light went on. I was like, “Okay, well you know this already. You know what the data is going to show before we even give it to you because you live and work this problem every single day. But if you know it already, then why aren’t you doing anything about it? And why are you coming to us to solve it in a really complex data-driven way, when really what you need to do is just change your behavior?” And it was really frustrating for me, as a founder, sure, because I was putting all this energy into trying to build software and trying to build a company, but also just frustrating for me as a person and a citizen who—you fundamentally want to solve a problem, or help a community solve a problem, and realize that, gosh, the people at the center of it just aren’t ready for it to be solved.

Brian: Yeah. So, can you unpack this thing? Someone let you guys into the station. Someone let you guys sit down and show chiefs of police or something—there’s a disconnect here, and I want you to talk about how you got in front of these people because someone thought you were going to—either they were just entertaining you and saying, “Well, we had some people in that had this software and it didn’t help.” I don’t think that’s probably what it was. Someone thought they were going to get something out of your thing. Or maybe they—I don’t know. Was it a surprise to them? Tell me about that. Something doesn’t add up there.

Allison: Yeah, absolutely. So, we were working—I worked across two different major urban police departments. I’d rather, for their privacy and the privacy of a bunch of other folks, rather not reveal exactly who they are, but trust me that it was two large urban police departments. And we were working directly with the technology innovation staff there, and working at the chief of police level and presenting them with this information, and working with them. It did have that level of visibility.

And they invited us in because they actually didn’t have a way of getting this information easily. So, as I explained in the blog post, if you think about the data that would go into predicting whether a police department has issues with certain officers, you would often have HR records, complaint records, arrest records, and then you might have information about the areas that they patrol, and crime levels, and demographics, and whatever other information you would need to figure out what’s actually going on with how the officer is working. And they had all of that information, but they didn’t have all of that information in one place, and they didn’t have it in an easy-to-use, easy-to-see summarized format. They may have had some kind of rudimentary early intervention system that only used one or two of those pieces of data, but they genuinely thought that if they could get all this information in one place and use advanced analytics—partly because it was just the thing that everybody was talking about, and gosh, if you’re in technology and innovation, then you should be using machine learning—

Brian: Wait, is the “they”—sorry, I got to interrupt you—is that “they” here the innovation group, or is the “they” the chiefs of police, saying we don’t have a single pane of glass to look at this info?

Allison: Both. It was really both. The core motivation was, “We want to be able to quickly see which officers we want to worry about.” Neither the innovation team nor the chief of police had that easily accessible. And then the innovation team thought that working with us would solve that problem, and that working with us using machine learning would solve that problem for their chief of police, who was asking for something like that.

But you know what we learned—what I said is: getting all that data in one place is the first part of any analytics problem, but the second part of any analytics problem is just taking a summary look at that data: normalizing it, looking at averages, running basic Excel regressions or whatnot, nothing fancy, to give you a better sense of what direction you could go in. And if we had just stopped after that second step, and then put a dashboard on top of it, it probably would have gotten them 95 percent of anywhere that they needed to be.
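To make Allison’s “second step” concrete, here is a minimal sketch in Python of the kind of summary analysis she describes: normalize the records, look at averages, and run a basic regression before reaching for machine learning. The input file and all column names are hypothetical, not from any real department’s schema.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-officer table, merged from HR, complaint, and arrest systems.
# Columns (illustrative only): officer_id, years_of_service, beat_crime_rate, complaints
officers = pd.read_csv("officer_records.csv")

# Normalize by tenure so a 2-year officer and a 20-year officer are comparable.
officers["complaints_per_year"] = (
    officers["complaints"] / officers["years_of_service"].clip(lower=1)
)

# Summary look: who sits far above the department average?
mean = officers["complaints_per_year"].mean()
std = officers["complaints_per_year"].std()
flagged = officers[officers["complaints_per_year"] > mean + 2 * std]

# The "basic Excel regression" step: does patrol-area crime level explain the
# variation before we attribute it to the officer?
X = sm.add_constant(officers[["beat_crime_rate", "years_of_service"]])
fit = sm.OLS(officers["complaints_per_year"], X).fit()

print(fit.summary())
print(flagged[["officer_id", "complaints_per_year"]])
```

A simple dashboard over outputs like these is the “95 percent” Allison refers to; nothing in the sketch requires machine learning.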

Brian: Interesting. So, did you guys ever pose the question to them about, like, “Okay, so you want this single pane of glass, and you want to know which officers we may need to have an intervention with, or to send to training, or something. So, what are you going to do with this—okay, so the answer is here are the top five by worst offenses, or whatever, and it’s John, Sally, Jane, Roger and Victor. What are you going to do now?”

Allison: Yeah. So, we asked them that a few different times, in a few different ways. And for them, it really depended, and they hadn’t actually gotten that far. They knew they wanted to do something about it, but they were very worried—and rightfully so—about matching the intervention to the issue, right? Because we wanted to find—and it’s important to think about this—we wanted to find not just officers that we have been reading about and seeing on the news that have unlawfully used force against civilians.

That wasn’t their only concern. They also wanted to find officers who their fellow officers thought were alcoholics because they have complaints that are internal as well. Or they wanted to find officers where they, maybe, needed some mental health support. And so, the interventions for different kinds of officers needed to look different depending on what the need was, or where the shortcoming was. An officer that has been using force on a regular basis might be suspension, or firing, or retraining, or whatever it is they chose to do. An officer that they thought might be an alcoholic, it was rehab, or something else. It just looked very different.

Brian: And so did you measure that this wasn’t going to be viable because they decided not to buy this product? Or through the process of actually prototyping, and maybe showing them their own data, at what point did you realize like, “Whoa, this is not going to be used as decision-support information. It’s just not going to.” Did they verbalize that? Did you just wait for a purchase that never came? When did you know?

Allison: So, it happened over a period of time. It wasn’t just an instant thing. I would say there were a few decision points. The first was, frankly, the expense. To build a machine learning solution is extremely expensive. You need to get all of the data in one place, you need to hire data scientists and software developers, and you need to customize the models to fit each of the departments—because, by the way, every department defines complaints differently, every department defines use-of-force differently, every department has different policies, and so you need to build the models to customize off of that, along with a bunch of other stuff. And they each use different systems to record their information. And so, to build that system is very expensive. And I don’t know if you’ve heard, but government as a whole does not like spending a whole lot of money on innovative technology on a regular basis.

And so, the first thing I realized was there was a mismatch between the price point and the cost it would take to develop the solution against their willingness to pay for a solution. They probably weren’t willing to pay; even with how much the lawsuits cost, asking them for, say, a million dollars a year was probably a pretty high price point, but that is probably what we would have wanted to charge given how expensive it was to develop it, and customize it for each department. So, that was the first part of the problem where I realized that this wasn’t going to be something that would work for them. At least the approach that we were taking at the time.

The second part was roughly around the story that I tell in the blog post about Batman, where we were working with one of these departments, and we came back with a list of officers that we had identified as high risk, and one of them was an officer that the department internally referred to as Batman because he genuinely thought he was Batman, and acted like he was Batman, which might be what you want in a comic book character, or in a film, but it’s not what you want in a police officer patrolling on the street. And the officer was employed; he was still working for the department; they knew he was a problem; they knew he was Batman, and when you come at them and say, “Hey, here is one of your most problematic officers. What are you going to do about it?” And then they say, “Oh, well we already knew he was a problem. Yeah, he’s still employed.” That demoralized me in a pretty major way where I was like, “Okay, well, great. What’s the decision that we’re going to help you make here if that’s the case?”

Brian: Yeah. So, talk a little bit about who Batman was, and if you could also share—one of the things I think, and correct me if I’m wrong here—my cousin is a police officer; solid guy. The thing that I thought I read in your article is that, at least in the departments that you looked at, this is a case of a few bad apples that are pretty bad, and a lot of officers that don’t have any complaints against them. Is that a pattern that you were seeing? Hopefully, you were seeing?

Allison: Yeah, I mean, that’s the pattern we saw across the board. Most officers—easily more than 50 percent—don’t get any complaints against them. A lot of those officers might be central office or might be working in quote-unquote, “easier areas,” but a lot of officers in difficult areas still don’t get a single complaint against them. And that’s really, really common. And a few officers get many, many, many complaints against them. And the best way to predict if an officer is going to have a future problem is if they’ve had a problem in the past. If an officer has had a previous complaint from the public about use-of-force, then it is highly, highly likely that they are going to have a problem with use-of-force again. If they’ve previously gone on a chase that they weren’t supposed to go on, they are very likely to go on a chase again, if one of their colleagues has previously noticed an issue with alcohol use, then they are very likely to have a second colleague that noticed an issue with alcohol use.

And none of that’s surprising; that’s a pattern that you see a lot across all sorts of people’s behavior: people don’t really change that much. And so yeah, I would say that pattern is very true. And often what we were seeing was that these issues were manifesting in the early years of someone’s career. It was relatively unusual, in the data we were seeing, for somebody’s first complaint to come through in their 10th year of service. It wasn’t that they were a great officer for 10 years and then, gosh, all of a sudden they started to rack up a bunch of use-of-force complaints in the next few years. It’s often that it’s early in their career that they start these patterns, and then nothing is done about it, and they keep going. Whereas if that intervention happens early in their career, then you can do something about it. And that’s the pattern that we saw in the data.
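Allison’s strongest predictor (past problems predicting future ones) can be checked with almost no modeling. A minimal sketch, again with a hypothetical file and column names, of the two patterns she describes: a heavy right tail in complaint counts, and early-career onset.

```python
import pandas as pd

# Hypothetical complaint log: one row per complaint.
# Columns (illustrative only): officer_id, complaint_date, officer_hire_date
complaints = pd.read_csv(
    "complaints.csv", parse_dates=["complaint_date", "officer_hire_date"]
)
complaints["years_into_career"] = (
    complaints["complaint_date"] - complaints["officer_hire_date"]
).dt.days / 365.25

per_officer = complaints.groupby("officer_id").agg(
    n_complaints=("complaint_date", "count"),
    first_complaint_year=("years_into_career", "min"),
)

# Pattern 1: among officers who appear in the log at all, a few account for
# many complaints (officers with zero complaints never enter this log).
print(per_officer["n_complaints"].describe())

# Pattern 2: the first complaint usually lands early in a career.
print(per_officer["first_complaint_year"].median())

# The baseline "predictor": among officers with at least one complaint,
# what share go on to a repeat complaint?
repeat_rate = (per_officer["n_complaints"] >= 2).mean()
print(f"Repeat rate among complained-about officers: {repeat_rate:.0%}")
```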

Brian: Got it. And that pattern you said you saw across more than one station?

Allison: Oh, yeah. I mean, yeah. We were looking at thousands of officers. And so we saw that throughout the data set we were looking at.

Announcer: Here’s what people have been saying about the designing human-centered data products seminar.

Participant: A pretty quick and accessible snapshot of how to think about user design for data solutions products. I mean, you can go super deep on UX design; you can talk about data solutions, but how do you combine the two in a very digestible way? I think it’s a very, very solid framework to get started.

Brian: Interested in joining me and a group of fellow students in the next cohort? Visit designingforanalytics.com/theseminar for details and to learn how to get notified when pre-registration begins. I hope to see you there, and now back to the episode.

Brian: So, this Batman character, I found that really interesting. Was there a frank moment where Batman appeared on the screen, and there was a conversation with the chief, or something, about these characters, where the light went on where you knew they simply weren’t going to do anything about this character? And did you see that repeat at other departments?

Allison: Yeah, so when I heard about Batman, it was actually in a conversation—a phone call, I remember—with our main contact at the department, who wasn’t the chief; she was somebody in the technology innovation office, and she didn’t know about Batman. She worked in technology and innovation. She didn’t have a sense of who the officers were, so she was actually repeating a conversation to me that she had had when she brought the list to their senior leadership, who actually knew the officers on patrol better than we did. And frankly, it sounded like she was as confused and as worried as we were. She didn’t know that this officer was on the force prior to us bringing it to her, but it also wasn’t her job to tell the chief of police, or the head of patrol, what to do with the officer. She was only there to bring the list to them. And so, we had a conversation—again, this was a few years ago, so I don’t quite remember what she said about what they were going to do—but I don’t believe this was a brand-new officer; I believe this was an officer that had been around a little while. And I definitely remember her repeating the conversation, it really just being like, “Yeah, I asked them, and the person who responded was just like, ‘Oh, yeah, that’s Batman,’” as if it was just an everyday thing to talk about. As if it was not a problem at all. Or funny.

Brian: And in your data—we’re all thinking about the killing of George Floyd right now, which is, really, just one major incident that represents a real systemic problem here—I’m wondering, was there a race component to the data that was here? And I’m curious, were you dealing with minority chiefs of police as well during any of these interactions, or was it primarily white? Tell me about the race component of this whole story.

Allison: No, that’s a valid question. So, we were very careful about the use of race data in our analysis. I don’t remember—and apologies, again. It’s been a few years since I looked at this data set—

Brian: Sure. That’s okay.

Allison: —and so I’m going to be light on detail. I don’t remember there actually being a substantial race component to the data set that we were looking at, but I’m not sure. The other question was around chiefs of police. Yes, one of the chiefs of police that we were working with was a person of color. I believe he was Black. But I don’t believe that the leadership of the department, and the race of the leadership of the department, made a difference there whatsoever, as far as their commitment to any of this. We were working with departments that, again, were forward-thinking. They wanted to be innovative, and they were working with us because they said that they wanted to do the right thing, and I keep using those words, “said they wanted to do the right thing,” pretty definitively to emphasize the difference between that and actually doing the right thing. But I believe we were just very, very careful about the use of race data. We wanted—for a variety of reasons—at the earliest stages where we were working with a department, to not necessarily include that information.

Brian: Got it. I’ve been thinking a lot about diversity, including in the guests on my show, and when I even think about what you just said, it makes me wonder, would a person of color or a Black person working on that team have felt the same way about whether or not race data should have been part of the product, or the way it was presented or not presented? Where does it fit into the bigger picture? These are questions I start to ask more, lately. I don’t know about you.

Allison: Yeah, no. So, I think that it should be included; it undoubtedly should be included. I believe that with our clients—again, at the stage we were at—if we had moved forward further, if we had really gone into full commercialization and brought this forward as a product, I would have insisted that it be part of the metrics, because it should be. But I think a better way to answer the question is: if I’m remembering correctly, we did have race data, but race was not the primary predictor or reason for it. It may have been a factor, but it was not that there were racist cops wandering around, using force only against people of particular races. It was that there were officers who were much more likely to use force than other officers. And those officers were disproportionately impacting people of color, if that makes sense.

Brian: Yeah. No, I understand. And I’m not advocating for either—I don’t know, I think it just warrants the question for all of us, as we work on some of these types of tools, to have the time to reflect on these questions. And I think diversity, especially now, as we move into artificial intelligence and machine learning—we’ve seen the famous incidents where race has an impact on facial recognition software, whatever it may be—this diversity thing should have been more important than it has been, and now, I think, the technology is really helping surface some of these problems. So…

Allison: Absolutely. I agree with you, and I saw it there, and I saw it in the other startup that I worked for, which also used a lot of person-level analytics, where there are major blind spots in data. And algorithms right now are, frankly, open source. The best machine learning, the best analysis is open source. And the core differentiator is the data that you have, and how you use that data. Which means that you always have to have an examination of what that data is, of what its biases are, and how you make sure that it’s giving you a comprehensive and accurate look at the world that you’re trying to analyze.

And so removing race in any way from that data set means that you’re going to get an incomplete answer to many, many, many different questions. Or assuming race in that data set means you’re going to get an incomplete answer to many, many questions. Or, just not thinking about it at all, and not thinking that race makes a difference. You talked about facial recognition: that’s a prime example where, when you don’t think about race in the data set that you’re working with, you get results that are just wildly inaccurate. And so that’s very, very true as you think about how you apply the algorithms to the data sets you have. The most important thing is to take a look at the data set that you are actually working with.
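One way to operationalize what Allison is saying is a routine audit of a data set’s composition before any model touches it. A minimal sketch, with hypothetical fields and placeholder population figures standing in for real census data:

```python
import pandas as pd

# Hypothetical incident-level data; field names and race categories are illustrative.
incidents = pd.read_csv("incidents.csv")  # columns: incident_id, subject_race, district

# Share of each group in the data set versus in the population it describes.
# The population shares below are placeholders; real ones would come from census data.
dataset_share = incidents["subject_race"].value_counts(normalize=True)
population_share = pd.Series(
    {"white": 0.60, "black": 0.20, "hispanic": 0.15, "other": 0.05}
)

audit = pd.DataFrame({"dataset": dataset_share, "population": population_share})
audit["over_representation"] = audit["dataset"] / audit["population"]
print(audit.sort_values("over_representation", ascending=False))
```

A ratio far from 1.0 doesn’t answer any question by itself, but it tells you which questions the data can and cannot answer accurately, which is exactly the examination Allison argues for.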

Brian: Yeah, and part of the diversity thing, too, is asking the questions you don’t know to ask. And that’s part of what you get out of having a diverse team is that they’re just going to surface questions that no one else is asking about. And then you can have the discussion about what to do about it, but I think that’s a really important element of all this.

Allison: Very, very, very much so.

Brian: I have a couple other questions. First of all, you literally did what I call a ride-along. It’s a research technique, a user experience research technique we sometimes deploy in the field, where we go and shadow someone doing their job, and I call them ride-alongs whether or not you’re in a vehicle. You were actually in a vehicle—

Allison: I was.

Brian: —doing research. Tell me about your ride-along, and I love this behavior. I love seeing it happen, and it doesn’t happen enough in the data science and analytics community, as far as I can tell. Tell me about that.

Allison: Oh, man. Honestly, I went on two, and they were probably the most valuable part of designing the system, because they contextualized everything for me in a way that was really important. One, I was riding along with patrol officers. So, I was riding along with people who were in my data set, and people who would be using and impacted by the analysis that we were doing. And then, two, they just contextualized the data. If I’m looking at a bunch of stop data, and arrest data, and officer data, what are stops like? What is a good stop versus a bad stop? There’s an anecdote from this that I didn’t include in the blog post that I think is actually pretty relevant.

So, I was riding along with one of the officers, and we go to a call at a house, and a number of other officers are there, probably three or four officers. And it’s a weird call to a family that the department already knew and basically a visitor at the house had called the police to help him with a couple of residents of the house. But there was no crime being committed, and there was no crime that was committed. But the interaction got a little tense.

The person who called—the cops were like, we can’t do anything here. There is no crime; there is nothing going on. And I completely agreed with them, but because they weren’t doing anything, the person who called them was pretty unhappy about that, and he was under the influence of some sort of substance. He got up, started yelling a little bit, was clearly vocally unhappy, and threatened to file a complaint. I think he actually called up the department while we were there to do that, and the officers responded not super calmly. They tensed up; they started yelling a little bit back. They weren’t de-escalating is what I would say. They weren’t escalating, and they certainly didn’t use force or anything like that, but it was also the sort of interaction where you were like: just walk away. This isn’t getting you anywhere; this is going nowhere; this guy is going to be unhappy with you, and he’s kind of drunk, and there’s nothing you can do at the scene, and you’re yelling back at him. And this isn’t a use-of-force.

I don’t know if he actually ended up filing a complaint. And frankly, what he wanted to file a complaint about was not that they were yelling at him, but was that they weren’t intervening in a situation that they shouldn’t be intervening in. They were doing the right thing, and he wanted to complain about that. And how does that show up in the data? And how should that show up in the data? Because you have a situation where the officers are actually, from an arrest perspective, definitely doing the right thing. From a complaint perspective, may get a complaint filed against them, but they’re getting that complaint filed against them for a reason that is not actually why I, as an outside observer, think that they should have a complaint filed against them.

Brian: Did that change the way you approached any of the product design choices when you saw that?

Allison: So, it made me spend a lot more time thinking about how the complaints were filed in the first place and the process for filing complaints. And this goes back to what I was saying before about what your data is. I leave it as a footnote in the article that the way complaints are filed department to department is really, really different. And so that results in complaints looking really, really different from department to department and counts looking different. But how many are actually reviewed and sustained?

And that looks really, really different department to department. I spent a lot more time thinking about what the data we’re looking at actually is, and that is what made it even more expensive and difficult to design a system to handle this. Because if a complaint in Chicago and a complaint in New York might both be filed with the same complaint code, but they’re actually very, very different actions, and they went through very, very different processes to get there, and they went through very, very different processes after that complaint was filed, then they’re not the same complaint. And so I definitely spent more time thinking about how you should handle two complaints that might look the same on the surface, but are actually quite different from each other.

Brian: Yeah, I think this is so telling. So, there’s a lot more information that’s not necessarily captured in the data set you have, and by going through the exercise that you did, you can start to inform how we might change the presentation, or the experience, or whatever it may be about using our product, based on knowing this. And this may be something where the police officers themselves, or the chief of police, may not have thought about it this way because they don’t have that outside perspective. It’s just like, “Oh, damn. I got another complaint filed against me; I’m screwed. And there’s nothing I can do about it.” And maybe a product person or data person could say, “Well, you know what? We have this information. We can actually separate these into two categories, whatever it may be, and then split that out so that we’re making better decisions here. We’re not just treating them all as apples.” So, I think it’s really great that you’re doing this behavior, and we’re seeing it in action. So, thank you for sharing that. [laughs].

Allison: Yeah, very much. And to re-emphasize that point, because I actually think it’s really important: in some places, when a citizen files a complaint, the department goes and reviews it, and—let’s say, I’m making up the numbers here—more than 50 percent of those complaints are sustained and are permanently on the officer’s record. But in other places, a citizen will file a complaint, and the review process is such that 5 percent of complaints are sustained and on the record. And the citizen, on average, isn’t any different, but the department process is. And so if two different departments are saying, “Well, I only want to look at sustained complaints,” what does that do to your data as well? And are you actually effectively measuring how the citizens view the police departments? Or are you just measuring the process by which complaints are sustained?
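Allison’s made-up numbers make the measurement problem easy to demonstrate. A minimal sketch (all figures hypothetical) of how identical citizen behavior produces very different “sustained complaint” counts under different review processes:

```python
import pandas as pd

# Two departments with the same underlying complaint volume but very
# different review processes (Allison's made-up 50% vs. 5% sustain rates).
departments = pd.DataFrame({
    "department": ["A", "B"],
    "complaints_filed": [200, 200],
    "sustain_rate": [0.50, 0.05],
})
departments["complaints_sustained"] = (
    departments["complaints_filed"] * departments["sustain_rate"]
).astype(int)

# Naively comparing sustained counts makes Department B look 10x cleaner,
# even though citizens filed exactly as many complaints there.
print(departments[["department", "complaints_filed", "complaints_sustained"]])

# Comparing filed complaints (or rescaling sustained counts by each
# department's own sustain rate) measures citizen reports rather than
# the department's review pipeline.
departments["estimated_filed"] = (
    departments["complaints_sustained"] / departments["sustain_rate"]
)
print(departments[["department", "complaints_sustained", "estimated_filed"]])
```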

Brian: Right. What would you do today? Like, “Okay, I got fired from my job and I’m no longer at the venture capital company; is it time to bring this product back?” What would you do differently if you were getting the band back together again? Is there an opportunity to do that, and what would you change, or how would you approach it now?

Allison: Great question. So, actually—and this is the other part that I didn’t put in the blog post—a lot of the technology has been picked up by another team. So, there is a modified version of this product in the market. Again, I don’t have the consent of the company to talk about it, so I don’t want to name the company that’s doing it, but the leader of that company is somebody with a lot deeper content knowledge, coming from a policing background and a police executive background, than I have.

And I met with them a couple of years after I stopped doing this, somewhat coincidentally, actually, and at that point in time, they had come to a lot of the same conclusions that I had around everything we’ve been talking about. So, I know that they’re doing quite well, and I believe that they’ve made some product changes—or at least that was the direction it seemed like they were going in—to make sure that they’re actually measuring the right thing, in a way that’s simple and presented really well to the departments, in the way that they need it to be. And because they come from that world more than I ever did, they’re able to have those conversations more frankly and work with departments and the unions more effectively than I was able to.

Brian: Got it. Allison this has been really great. Thank you for writing this article and being so open about your experience here. I just, kind of, final question. Do you have any advice for what I call data product managers, and heads of product, heads of data science and analytics that have some kind of productization responsibility, value responsibility with data? What would you—what are your takeaways? And this can go beyond just this one incident. You’re obviously seeing a lot of projects and ideas float into your office and across your desk. So, what would your closing advice be for them?

Allison: Yeah, I think it comes down to a few different things. The first, which is what we were talking about: think about what the process was to collect the data that you’re working with, and where there might be holes in that process that are reflected in your data. The second: figure out what your customer actually needs. Yeah, it might be fancy analytics, but fundamentally solve the problem that the customer has, rather than the problem that you want to be solving, because you can get lost a lot that way. And then the third is something that all product managers who are better than I am do every day, which is just straight-up listen to your customer. Ask them what they care about, and make sure that you’re addressing their needs in a way that makes sense to them. And any data product person—any product person—should be doing that. But on the data side, definitely think about how the data gets to you, and what process it takes to get there.

Brian: Got it. Good tips. Good advice. Allison, where can people follow your work? Is there social media? Obviously, you’re on Medium. I’m going to put that up there, but—

Allison: Yeah, absolutely. So, my Twitter is @inalittleweil, W-E-I-L, a little punny there. And then you can find me on LinkedIn, and those are probably the two best places, as well as Medium.

Brian: Awesome. Cool. Thank you so much again for coming on the show and telling us your story. This has been great.

Allison: Absolutely. Thanks for having me, Brian. Really appreciate it.
