In this episode, Jeremi and Zachary, along with Dr. Sarah Brayne, discuss how large quantities of data are used in surveillance and how they may be used to heighten inequalities for certain communities.
Sarah Brayne is an Assistant Professor of Sociology at The University of Texas at Austin. In her research, Brayne uses qualitative and quantitative methods to examine the social consequences of data-intensive surveillance practices. Her book, Predict and Surveil: Data, Discretion, and the Future of Policing (Oxford University Press), draws on ethnographic research with the Los Angeles Police Department to understand how law enforcement uses predictive analytics and new surveillance technologies.
In previous research, she analyzed the relationship between criminal justice contact and involvement in medical, financial, labor market, and educational institutions. Brayne’s research has appeared in the American Sociological Review, Social Problems, Law and Social Inquiry, and the Annual Review of Law and Social Science and has received awards from the American Sociological Association, the Law and Society Association, and the American Society of Criminology.
Brayne has volunteer-taught college-credit sociology classes in prisons since 2012. In 2017, she founded the Texas Prison Education Initiative.
Guests
- Dr. Sarah Brayne, Assistant Professor of Sociology at the University of Texas at Austin
Hosts
- Jeremi Suri, Professor of History at the University of Texas at Austin
- Zachary Suri, Poet, Co-Host and Co-Producer of This is Democracy
[0:00:03 Speaker 1] This is Democracy, a podcast that explores the interracial, intergenerational, and intersectional unheard voices living in the world's most influential democracy.
[0:00:21 Speaker 1] Welcome to our new episode of This is Democracy. This week we're going to discuss one of the biggest topics of our time, a topic that is with us in all sorts of discussions around foreign policy, around policing, around COVID economic aid: the question of big data. How are large quantities of data used in our society, how can they be used, and particularly, how are they used for surveillance purposes, to surveil the ways in which we behave, the ways we interact, who we talk to? This is a fundamental question for our democracy, because there's a potential for big data to provide assistance and provide for more equality in our society. There's also the danger of data being used to heighten inequalities and differences among us in our society. And we have the opportunity this week to talk to the person who I think is doing the most important cutting-edge research in this area, Sarah Brayne. Sarah has just written a fantastic new book that I highly recommend to everyone, called Predict and Surveil: Data, Discretion, and the Future of Policing. We're going to talk to Sarah about her research and about this book. Sarah is an Assistant Professor of Sociology here at the University of Texas at Austin, and her research, really, is extraordinary. She uses qualitative and quantitative methods to examine the social consequences of data-intensive surveillance practices. So she really uses both qualitative and quantitative ways to understand how big data is used and misused in our society. She has written extensively on these issues, and her book is really an encapsulation of a lot of the groundbreaking research she's done. It's also a very readable introduction to these issues. Sarah, thank you so much for joining us today.
[0:02:11 Speaker 0] Oh, thanks so much for having me
[0:02:13 Speaker 1] Before we turn to our discussion with Sarah, of course, we have Zachary's scene-setting poem. Zachary, what's the title of your poem today?
[0:02:22 Speaker 2] "Does the Algorithm Understand Poetry?"
[0:02:26 Speaker 1] I'd like to hear the answer to that. Let's hear the poem.
[0:02:32 Speaker 2] The algorithm can tell the time of day when shopliftings are most likely to occur. The algorithm can track you through a hidden camera and know exactly when you will urinate in an alleyway. The algorithm can see you as a dot, nothing more, and the algorithm can count up the number of midnight helicopter rides that will scare you into safety. But does the algorithm understand poetry? Does it know what it is like at the other end, the way the sun almost seems inimical as the dirty glass door jangles open into the corner store, or what it's like to stare your life in the face and see it stripped down naked, right in front of you, as a stick of gum under a cash register, a roll of lottery tickets? The algorithm can track your license plate across the country. The algorithm can watch you step out of the car onto a bridge, or walk into a snowdrift and deposit yourself within the cold, hard truth, and the algorithm can be there with you to search the indices for the relevant definition of crazy. But does the algorithm understand poetry? Does the algorithm truly know what it means to be so moved by a verse that you would end up on a different coast, reciting Ginsberg under a palm tree, or searching an old neighborhood for the words of a song on a cassette tape that you happened to have taken, unnoticed, from a junk shop? So how can the algorithm stand there and tell me of the law of men, when the algorithm does not understand poetry? The algorithm, indeed, has seen the music fall effortlessly into your pocket in the back of the mildewed shop. "Return it," it admonishes you beneath your palm tree. "Don't you move an inch," it commands you from the other side of the dirty window. "Stop. Why should you get to be human?"
[0:04:26 Speaker 1] I love the imagery there, and I love the algorithm speaking back at the end. Zachary, what is your poem about?
[0:04:31 Speaker 2] My poem is really about the sort of disconnect between big data and emerging technology in our world and the humanity that it's supposed to govern and somehow chart.
[0:04:39 Speaker 1] Well, Sarah, this takes us right into your space.
[0:04:48 Speaker 0] That was so cool. Thank you so much for sharing that Zachary.
[0:04:52 Speaker 1] That was really well done, Zachary. Really, really well done. Sarah, when we refer to big data, when Zachary speaks of algorithms here, what are we really talking about? What is this stuff?
[0:05:04 Speaker 0] Yeah, there are all of these definitional debates, and just to get everybody on the same page early on in the book, I say, you know, I'm using this term "big data" to refer to massive and diverse data sets, meaning large quantities of data that typically come from a range of different institutional sources rather than just one source, and then the analytics that are associated with it. So, you know, you can use predictive algorithms in order to make decisions. There are other types of advanced analytics you can use: network analysis, geospatial analysis, that type of thing. So it's sort of this catch-all term, basically, to describe the collection and analysis of massive amounts of data that a human being on their own is unable to collect and process.
[0:05:52 Speaker 1] And is this a new phenomenon?
[0:05:54 Speaker 0] Well, so the use of data to make decisions generally is not a new phenomenon at all, and the use of data in governance is not a new phenomenon at all. But the sort of big data era is made possible, or facilitated, by the mass digitization of information. So, you know, all of these digital trails that we leave in our everyday lives, that has really exploded over the past 10 or 15 years, and that is what makes this big data environment possible, I think. So it's new in that sense.
[0:06:24 Speaker 1] And one of the points your book makes so well, from page one, is that the data does not have prima facie wisdom; it doesn't give objective answers. What is subjective about it?
[0:06:38 Speaker 0] Yeah, so that's sort of the idea, basically, that data does not necessarily speak for itself. It's not just a mechanical reflection or a mirror reflection of what's going on in the world. And I think that actually Zachary's poem gets at this, in a way: data captures certain things, right, certain measurable, observable behaviors and things that we do. But there are a lot of sort of intangibles in the lived experience that are not captured by different data points and therefore just don't make it into the corpus of data that we analyze in order to make decisions. And so I think that data is fundamentally shaped by who's collecting it and what kind of data collection you're doing. And there's some really interesting work about missing data as well, which I think is an important point too. All of that means that data is fundamentally social. Just like social experience, it is sort of better or worse at capturing certain things than others.
[0:07:39 Speaker 1] And you make that point so well throughout your book, particularly in the conclusion, where you say it just the way you said it here: the data is social. It might look objective, it might look mathematical, but it's social. What are the implications of what seems like a simple statement, but is, I think, an enormous, almost radical move on your part?
[0:07:57 Speaker 0] Well, I think that if we don't consider data as social, if we do consider it sort of objective, unbiased, the ground truth, then what happens is that all of the social processes that shape data collection become obscured. And so, who cares? Why do we care if any of that happens? It means that, you know, the patterns of decision-making that police make are rendered invisible. It means that the way that credit bureaus collect our information and, you know, make it possible for us to get good or bad terms on our loans, all of that is rendered invisible. And so what ends up happening, if you don't consider data as this social product, this social resource, is that you end up kind of missing the whole social and institutional side of the story, and missing how the use of data for decision-making can obscure and amplify some existing inequalities in society.
[0:08:53 Speaker 1] Is this what you mean by the term "tech-washing," which you use in the book, and which I had not encountered before?
[0:08:58 Speaker 0] Yeah, I can't take credit for that term, but I heard it somewhere, and I think it's sort of a riff off of greenwashing, the idea of different products being sold as sort of environmentally friendly. And so this idea of tech-washing is the idea that if you saturate things with, or layer on top of them, mathematics, quantification, computation, there's a veneer of objectivity that comes with it.
[0:09:28 Speaker 1] And in fact, it can hide, intentionally, some of the biases that are built into the ways you're collecting and using your data. Is that right?
[0:09:35 Speaker 0] Yeah, absolutely.
[0:09:37 Speaker 1] So one of your chapters that I found really fascinating, but also horrifying, is where you talk about dragnet surveillance. Share with us more about what that is in your analysis.
[0:09:50 Speaker 0] Well, dragnet surveillance is just the idea of the surveillance of, or the collection of data about, everybody, rather than just those who are under criminal suspicion. So we typically think of the police, you know: they stop somebody and they write down some of their information on a contact card and put them into the system. Or, of course, anybody who's arrested, their data is going to be in police systems as well. But dragnet surveillance is the idea that even if you don't have any contact with the police and you're not under suspicion, just as we go about our day-to-day lives, our data is collected.
[0:10:23 Speaker 1] And what's so concerning about this? I mean, it sounds similar to Google, you know, collecting data on my searches on my computer.
[0:10:31 Speaker 0] Yeah, I mean, so I think that there is no reason for concern, necessarily, unless you care about the inherent value of privacy, which is a different question. There's not necessarily a cause for concern if you believe that the human beings that make decisions based on that data, and the human beings that collect that data, and the human beings that make policy decisions, really consequential decisions for individuals' lives, make all of those decisions and data collection efforts without any error, without any bias, without any prejudice. And this is sort of the idea of an infallible state, where nobody ever makes any mistakes. And that's just not borne out in research. Of course, all of us make mistakes all the time, and research suggests that we make mistakes in really patterned ways as well, regardless of whether or not there's ill intent. And so why this matters is that it's very difficult to put your finger on exactly where, or whom, you can hold accountable for the errors or for the unequal effects of the decision-making based on these data.
[0:11:35 Speaker 2] So what it sounds like you're saying is that big data reduces humanity to data points while still relying on imperfect human beings to interpret that data.
[0:11:48 Speaker 0] Yeah, I think that's a really good summary. There are a couple of terms to capture what you're suggesting there. Some people talk about digital doubles or digital doppelgangers, wherein, you know, when institutional actors are making decisions about us these days (what are the terms of our mortgage going to be? are we going to get released on bail? etcetera), it's not so much making decisions based on us as some sort of holistic human being. Instead, they're making decisions based on our digital doubles. And even more benign things, you know: what movies does Netflix recommend to us? That's not because they know, in our heart and our soul, our cinematic preferences. It's based on what we've watched in the past. Those are the observable things, and so decisions are made based on what is observable, and the human-in-the-loop aspect of it is often rendered invisible.
[0:12:38 Speaker 1] And what's so cool about your work, at least for me, Sarah, and I think for many of our listeners, is that these big concepts you're talking about, you apply them directly to policing. For this book that you've published, you spent a lot of time with the Los Angeles Police Department, looking at exactly these issues in practice. First of all, how did you make this connection to the Los Angeles Police Department?
[0:13:01 Speaker 0] Well, so I didn't have any personal connections to the LAPD or to law enforcement generally. I'd actually never even been to Los Angeles before I started my fieldwork. But when I decided, you know, that I wanted to explore how big data analytics was playing out in the law enforcement context, I decided that I wanted to study a police department that was quite technologically advanced. And so the first thing that I needed to do was figure out which police department I wanted to try and obtain access to. Prior to that, I had only done quantitative research using existing survey data, but there just wasn't really any data on police use of big data, so I quickly realized I needed to collect my own. And after doing some exploratory work, I narrowed it down to the Chicago PD, the NYPD, or the LAPD, which were the three departments I was interested in, which I guess is kind of unsurprising insofar as, you know, they're the largest, the best funded, that type of thing, and therefore the most technologically advanced. And so I tried to gain access to all three, and I ended up finding this organization, at the time called the Consortium for Police Leadership in Equity (now it's the Center for Policing Equity), headed by a psychology professor, Phil Goff, who was at UCLA at the time, and it basically partners researchers with police departments. And I asked them to introduce me to someone in the LAPD. I mean, it was quite difficult to get access to all of the people in all of the different divisions that I needed to talk to in order to try and provide as full a picture as I could, but that was sort of my first point of contact within the department, and it was just one meeting. And so I moved out to LA for six weeks based on one meeting and decided, you know, I'm just going to try as hard as I can to talk to everybody that I possibly can, and I'll see how this goes. So at the end of the meeting, I was like, you know, thank you for your time; is there anybody who could take me on a ride-along? And he was like, yeah, okay, sure, you go with this sergeant. So I went on the ride-along, and a ride-along is, you know, basically a seven-hour interview. He mentioned all of these different people and divisions, and I said, you know, can I get their email information, or can you introduce me? And so I did what qualitative researchers call snowball sampling for the first six weeks: just cold-calling people, cold-emailing people, loitering around division offices. You know, I'd finish an interview and then just wander around the hallways until somebody asked me if I was lost, and then I'd do another interview. So it was really an uphill battle in that sense. But after that initial round of fieldwork, access was more straightforward, because I had talked to so many people by then that other folks in the department were more open to talking to me. My presence was kind of accepted, in that sense.
[0:15:48 Speaker 1] Right. It's such an important point, one that I often discuss with my graduate students: so much of research is hanging around, loitering, and just looking to make connections. You don't know what's going to hit, but you have to just keep trying, and eventually you make a connection, and then it builds to other connections.
[0:16:07 Speaker 0] Exactly.
[0:16:08 Speaker 1] One of the most interesting parts of your work to me was where you were describing some of the ways in which the LAPD uses technology and big data, in ways I had never thought of and I think most people don't realize. You talk about this program, Palantir, that they use, and there was a paragraph I actually wanted to read from your book, where you talk about how this gentleman, Doug is his name, goes from 140 million vehicle records to 13. They're looking for a particular car. And he went on to show you how to look up which of the 13 had any citations or arrests, the divisions in which they received their citations or arrests, and identify one person who had been cited in the same division in which the robbery occurred. If the person ended up not being the person who committed the robbery, officers could simply save the search formula and keep running it in the coming days, just in case any new data came in. You asked what happens when the system gives a false positive. And here's the punchline, of course: what happens to the wrong suspect, you ask? Doug said bluntly, "I don't know." What are we supposed to make of this?
[0:17:18 Speaker 0] Well, I mean, this was something that, as a researcher, I really wanted to observe. I wanted to observe what happens when there's a false positive, basically, when the data trail leads you to focus on somebody who ended up not actually committing the crime that the police suspect them of. And this ended up being actually really difficult to observe, in the sense that it sort of just manifested in investigative dead ends, and a lot of the time, you know, I wouldn't be with the detective when they would go knock on this guy's door, for example. But I was able to see a little bit, when I went on ride-alongs, of what happens when somebody has a high criminal risk score, for example, in this kind of person-based predictive policing formula that they would use. Like, sometimes guys would get stopped on the street three or four times a day, and they wouldn't be in the commission of any crime or anything like that, and there wouldn't be a warrant out for their arrest, but they would get stopped over and over, by virtue of having a high risk score. And I didn't do research with those community members themselves, but lots of other people have done really incredible work on that, and this can undermine one's perception of the legitimacy of the criminal legal system, or even the government more broadly, if you're constantly getting harangued when you don't think you're doing anything wrong.
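To make the kind of query Doug demonstrates a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not Palantir's actual interface or API; the record fields, filter values, and function names are invented for illustration. What it shows is how a few individually weak conditions, stacked together, narrow millions of records to a handful of people, and how "saving the search formula" just means re-running the same filter as new data arrives.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical record structure; not Palantir's actual data model.
@dataclass
class VehicleRecord:
    plate: str
    color: str
    cited_division: Optional[str]  # division of any citation/arrest, else None

def matches(rec: VehicleRecord, partial_plate: str,
            color: str, robbery_division: str) -> bool:
    # Each condition is weak on its own; stacked together, they can cut
    # 140 million records down to a handful of candidates, none of whom
    # is necessarily the actual offender.
    return (partial_plate in rec.plate
            and rec.color == color
            and rec.cited_division == robbery_division)

# "Saving the search formula" amounts to persisting these parameters
# and re-applying the filter every time new records are ingested.
def run_saved_search(records: List[VehicleRecord]) -> List[VehicleRecord]:
    return [r for r in records if matches(r, "7ABC", "black", "Central")]
```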
[0:18:44 Speaker 1] Well, and one of the implications I took from your research is that the data is reinforcing and driving bias, because they collect more data on particular communities, and then that data gives them higher risk scores. And then, because there's more data from the places that they suspect of criminal activity, they collect more data from them, and therefore they have more data to follow up on, and so those people are more likely to be stopped, harassed, etcetera. Is that a fair reading of this?
[0:19:14 Speaker 0] Yeah, absolutely. I mean, it's very much a sort of quantified self-fulfilling prophecy, wherein if you, you know, continue sending your police resources, whether that's patrol cars or officers or whatever, to the same areas, or to police the same people, you're more likely to detect criminal activity in those areas, whether it's a true or false detection of it, which then is going to create the arrest statistics to justify allocating more police resources to that area. And so it becomes this self-fulfilling prophecy, or this feedback loop, where you can get a ratchet effect in which the risk scores and the allocation of police resources become decoupled from actual criminal behavior. It's as much a reflection of enforcement practices as it is of actual criminal offending.
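The feedback loop Sarah describes is easy to see in a toy simulation. Here is a minimal, purely illustrative Python sketch (not from her book; all numbers are invented): two areas have identical underlying offending rates, but patrols are allocated in proportion to past arrest counts, so the arrest statistics end up mirroring where enforcement was sent rather than any difference in behavior.

```python
import random

random.seed(0)
TRUE_CRIME_RATE = 0.1   # identical in both areas by construction
TOTAL_PATROLS = 100
arrests = [2, 1]        # a small early disparity seeds the loop

for year in range(10):
    total = sum(arrests)
    # Allocate patrols in proportion to each area's share of past arrests.
    patrols = [round(TOTAL_PATROLS * a / total) for a in arrests]
    for area in range(2):
        # More patrols mean more chances to detect offenses, even though
        # the underlying rate of offending is equal in both areas.
        detections = sum(random.random() < TRUE_CRIME_RATE
                         for _ in range(patrols[area]))
        arrests[area] += detections
    print(f"year {year}: patrols={patrols}, cumulative arrests={arrests}")

# The roughly 2:1 arrest gap persists despite equal offending rates:
# the statistics reflect enforcement allocation, not behavior.
```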
[0:20:03 Speaker 2] What degree of transparency is there in these departments? What can the public know about how their police department is using big data?
[0:20:12 Speaker 0] Yes. So, I mean, in theory, they can know through filing public records requests and this type of thing. But you need to know exactly what to ask for when you are filing these public records requests. And so really, you know, without research that identifies the names of these programs and this type of thing, you don't even know what to request: what contracts to request between the police department and the private companies that design the analytics software the police department uses, and that sort of thing. So, you know, there are many community groups, of course, and community members, that have long known that the police are conducting surveillance. But sometimes it's difficult to really know the ins and outs of exactly how these programs happen, and I think that's one of the main challenges to democratic oversight of policing today: big data policing is largely invisible, right? It's not always just police saturation, a bunch of cops on a street corner, where you can point your finger and say, you know, this neighborhood is being heavily policed, or is being policed unfairly or in discriminatory ways. Big data policing is largely invisible, and so that transparency, which is a first and necessary step to accountability, is really hard to get.
[0:21:24 Speaker 1] And it's particularly difficult for the communities that are targeted to even know where to begin, because these are communities that have fewer resources, right?
[0:21:32 Speaker 0] Absolutely, absolutely. I mean, there is some really impressive community organizing going on. But absolutely, you know, to a certain extent there's just such a power imbalance between the communities being policed and the police department.
[0:21:47 Speaker 1] So once you describe this, as you do in the book, and as I've seen you talk about it elsewhere (you're getting a lot of attention, as you should, for this work), it seems so obvious. Once described, it seems actually intuitive that the data is being collected on particular people whom they want to find data on, who then are more likely to be found to be doing criminal things, and more likely to be suspected. That bias, that racism, seems so obvious. How do the police defend this?
[0:22:17 Speaker 0] Yeah, that's an interesting question. So, you know, I think that, on the one hand, when faced with the exact same information, people can have really different reads of it based on their position in an organization. Like I mentioned in a previous response about the person-based point system, there's a formula for individuals getting criminal risk scores assigned to them: they get five points for a prior arrest with a handgun, five points for gang affiliation or gang association, this type of thing, and then they get a point added to their risk score for every police contact. And when I asked individuals in the LAPD, you know, how do you know if this is effective or not? Or why did you start using this system in the first place? Or isn't this just basically codifying racial profiling, or quantifying racial profiling? Their response would be, oh, well, the evidence that it's effective is that 80% of people who are on our criminal risk score list, our chronic offender list, end up getting rearrested within five years. And so for them, that was evidence of the program's efficacy. When I present that exact same finding to different audiences, they're like, no, that's evidence of the self-fulfilling prophecy that is going on here. And so I think, really, depending on the institution that you're in, and your goal and your institutional imperative, you can have a really different read of things. And a lot of the time, this kind of quantified policing can actually be very helpful to law enforcement agencies and officers encountering claims of racism or bias, where they can say, well, I stopped the guy because he has a high criminal risk score; not, I stopped the guy because he was in this particular neighborhood, or I stopped the guy because he was a 21-year-old Black guy, right? So there are legally defensible and not legally defensible reasons to stop somebody, and race, of course, is not legally defensible. But if you're able to say, oh no, it wasn't his race, it's his criminal risk score, which has this colorblind air that comes with numbers, that can actually be very helpful in doing your daily operations as a cop.
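For readers who want to see the arithmetic, here is a minimal Python sketch of the person-based point system Sarah describes. The two five-point items and the point-per-contact rule come from her account above; the field names and the example person are assumptions invented for illustration. The last rule is the crux of the feedback-loop concern: every stop adds a point, so being stopped more raises the very score that helps justify more stops.

```python
# Hypothetical sketch of the person-based point system described above.
# The five-point items and the point-per-contact rule come from the
# interview; field names and the example are illustrative assumptions.
def risk_score(person: dict) -> int:
    score = 0
    if person.get("prior_handgun_arrest"):
        score += 5                           # prior arrest with a handgun
    if person.get("gang_affiliation"):
        score += 5                           # gang affiliation or association
    # One point per police contact: merely being stopped raises the score,
    # which can then help justify future stops.
    score += person.get("police_contacts", 0)
    return score

# Example: no new offense, but repeated stops alone push the score up.
print(risk_score({"gang_affiliation": True, "police_contacts": 4}))  # -> 9
```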
[0:24:37 Speaker 1] Right. But that seems terribly deceptive, at least to me, because if you're more likely to have a higher risk score because of your race, or because of where you live, there's a bias built into that data.
[0:24:48 Speaker 0] Absolutely. But I think that data literacy, even in the general public, is not necessarily that high, and they certainly don't teach anything about any of this in the training academy with the police. And so, you know, I would often ask officers, can you define an algorithm for me, and this kind of thing, and some of their answers are great, they're amazing, and they're not at all correct. I had people talking to me about Ouija boards and full moons and witchcraft, and, you know, all of this kind of stuff. And so, yeah, I think that these questions that you're raising, around, oh, well, isn't there bias embedded in the data, etcetera, that's not necessarily something that resonates with them, that they're familiar with, or that they have as a critique. Some do, to be clear. There definitely were some folks, particularly in management positions within the department (there's one captain in particular that I'm thinking of), that did have quite a nuanced understanding of these things. And I do think that this will become an increasing part of how police officers are trained, etcetera. But right now, it's just so new. There's this new, mysterious technology that they're supposed to be deploying, and it's supposedly unbiased; I mean, as one officer described it, it's just math, right?
[0:26:08 Speaker 1] Right. And of course, math is supposed to be objective, people think. So one of the parts of the book that really struck me, though, and maybe runs a little bit against the goodwill that you hope, and we all hope, is there, is that many of the police you interviewed don't want this data collected on them. You have a whole chapter on police pushback. So that chapter indicated to me that, in fact, they don't believe this data is unbiased.
[0:26:37 Speaker 0] Yeah, yeah. I mean, that was sort of the funnest chapter to write, in the sense that it was very unexpected, and this is part of what I like about ethnographic fieldwork: unexpected things come up. You know, police officers' reactions to their own surveillance, which is made possible by the digital trails that they leave with big data policing, was not part of the original research plan, but it came up on my very first ride-along. We pulled up to this house, and I saw the officer manually type in that he was Code 6 at this particular address, meaning we had arrived at the location and were responding. And I was like, man, I picked this agency precisely because they're super technologically advanced; I thought, don't they have some automated mechanism for knowing where their cop cars are? And so I asked the sergeant about it, and he said, oh yeah, well, every vehicle is actually equipped with an AVL, an automatic vehicle locator, that pings the location of the car every five seconds, but they're not turned on, because of the police officers' union. So it was in that moment that I realized, you know, there's really a labor story here as well. And this idea of, you know, if you have nothing to hide, you have nothing to fear, let the data speak for itself, etcetera, etcetera: that idea totally evaporates when the police are the ones who are themselves under surveillance.
[0:27:55 Speaker 1] And so what do you take from that as implications for thinking about these issues?
[0:27:59 Speaker 0] Well, I mean, I think there are a few things. First is that this idea that if you have nothing to hide, you have nothing to fear falls flat when the surveillance is actually turned on you, and, importantly, when you have the political power, the organizing power, to resist that kind of surveillance as well. And second is the idea that data is this unbiased, objective reflection of things that is never going to be misused by individuals in power, never going to lead to disparate impact, unequal outcomes, this type of thing. It just really does not play out that way on the ground. When the surveillance was turned on the police, there was massive resistance. There was organized resistance in terms of the union. There was more piecemeal resistance, where they were ripping antennas off their cars, this type of thing; foot-dragging; they would often use their cell phones to communicate with one another instead of going through dispatch, because there's an official record of dispatch communication. There were all sorts of surveillance-thwarting behaviors that, in the police's eyes, would be evidence of criminality if a regular civilian were doing this kind of stuff. But in their own case, it was just kind of considered part of the job, or a natural reaction to coming under surveillance.
[0:29:25 Speaker 1] And so I think this takes us to our final question, Sarah, and this is asking you to go a little bit beyond, at least, what you've published in your research, though I know you've thought deeply about this. Where do we go from here? There's such concern in our society, and controversy from different corners, about policing. There are many who believe our policing system has historical problems of racism and bias that have become worse, or at least have continued on. There are others who think our police don't get enough support. And then there are a lot of people, probably the majority of people, who are somewhere in between and uncertain. How does your research help us move forward on these issues? Where can we go as a democracy? We're going to need police forces, but we want our police to be effective but also fair. How can your research help to inform that discussion going forward?
[0:30:14 Speaker 0] Yeah, that's a key question. And, I mean, unfortunately, I finished writing the book before the events of last summer, with the killing of George Floyd, when defund and abolish discourses really made it onto the national scene. But I think that what you're capturing there is this tension that exists between abolitionists and reformers. I think most everybody would say that some change needs to occur in policing, whether that's police getting more or fewer resources, doing the job differently, or not doing the job at all. But the tension between abolitionists and reformers basically is this. On the one hand, abolitionists would argue that policing is a fundamentally racist institution, and has been from the beginning, for hundreds of years; therefore, let's stop trying to tweak the system. It's obviously going to reproduce inequalities, because that's a feature, not a bug; it's what it was designed to do. Reformers, on the other hand, are basically like: look, local law enforcement agencies are here to stay; let's see if we can police better and use evidence, or whatever, in order to make improvements to the practice of policing. And I think that what I've seen occurring in the context of this big data stuff, or data-intensive policing, is that for a lot of problems in policing, whether it's racial bias, whether it's misallocation of resources or insufficient resources, etcetera, data is being proffered as the solution. It's like: okay, too much racial bias in officer decision-making? Let's automate it using algorithms. Defunding the police means we need to more efficiently allocate resources? Let's use predictive algorithms to identify where we should allocate the resources. And while I understand the impetus behind that, and behind movements toward evidence-based policing and whatnot, I think that if we are going into it with this assumption that we can solve fundamentally social problems with technical solutions, or with data-based solutions, that's kind of a false promise, and it's not going to work. Because, as even Zachary indicated, data is a reflection of preexisting inequalities and issues that exist in society. And so if we think about prediction as basically learning from the past in order to project about the future, whether that's an imagined, better future or not, you're going to have all of these different biases already baked in. And so my inclination would be toward focusing on, what's that expression, if you have a hammer, everything looks like a nail? Right now, we're focusing so much on how we can allocate police resources to do policing better. I think we should think a little bit more about how we can reallocate resources to police less, to need police less, to direct non-punitive resources as well, thinking more holistically. Ironically, police reform, I think, is going to have a lot to do with what we're doing outside of the context of police departments as well. We have a sort of anemic welfare state in many ways in the US, and there are other things that I think we can use data for, to direct resources toward.
[0:33:38 Speaker 1] Spoken like a good sociologist: we have to think about the larger social structure in which police operate. It makes perfect sense, and there's no doubt the place of police has grown in our society in the last 20 to 30 years, and many of the problems that the police are asked to address, or that we tell them to address, are issues for which they're not the appropriate institution. I think you say that very well, and I think your analysis goes very well in that direction. Zachary, listening to this as a young person who I know cares deeply about policing and about the future of our society, what do you think? I mean, do you see Sarah's analysis helping your generation to rethink or reimagine policing and other parts of our society?
[0:34:20 Speaker 2] I think that young people my age are really growing up without the same status quo assumptions about American policing, and whether you think that's good or bad, I don't think that my generation is going to go forward thinking that American policing needs to stay the way it is. I think that we're really in a moment when we can shape the way that law enforcement operates in our society, and I think that's because there is a willingness, and a recognition, that change does need to happen.
[0:34:49 Speaker 1] I think that's well said, Zachary. And Sarah, I think your work is so deeply grounded in research, but it also has this real, clear policy implication that you've articulated so well. It certainly inspired me, I know it inspired Zachary, and I think it will inspire many of our listeners. Thank you for joining us today, Sarah.
[0:35:11 Speaker 0] Oh, thank you so much for having me, and for your program.
[0:35:14 Speaker 1] And I want to encourage listeners to buy Sarah's book and read it. Again, it's Predict and Surveil: very readable, very short, but packed with important information. I also wanted to say, and I saved this for the end because I think it captures how Sarah is an engaged intellectual, really out there and involved: she has volunteer-taught college-credit sociology classes in prisons here in Texas, and I believe she did this in New Jersey as well, and she founded the Texas Prison Education Initiative. If you're interested in that, I'm sure Sarah would be happy to get you involved. It's really wonderful that you're doing that. Zachary, thank you for your poem, and most of all, thank you to our listeners for joining us for this episode of This Is Democracy. This podcast is produced by the Liberal Arts Development Studio and the College of Liberal Arts at the University of Texas at Austin. The theme music in this episode was written and recorded by Harrison Lemke, and you can find his music at harrisonlemke.com. Subscribe and stay tuned for a new episode every Thursday, featuring new perspectives on democracy.