
The Merits and Ethics of Predictive Policing

Abstract: This paper provides a critical examination of the debate surrounding predictive policing. It opens with the context of questionable data-driven management practices, then discusses the results of a randomized controlled trial that examines the accuracy of a predictive policing algorithm and its effect on arrest rates broken down by race. The discussion argues in favor of the potential of predictive policing, though a short digression acknowledges its limitations. Next, theories of crime, and models that capture them accurately, are introduced to further support that potential. These arguments are complemented by a section stressing the danger of racial discrimination embedded in the data used to train predictive policing algorithms. The paper concludes that reaching a holistic verdict on the topic requires a case-by-case analysis that takes the composition of a specific predictive policing model into account.

 

With the recent boom in artificial intelligence and machine learning techniques, and in the data-driven management practices that use these tools, there has been growing concern over their ethical implications. To give an example, information about convicts was used by the Florida judicial system to predict the likelihood of recidivism "as a basis for determining the length of their sentences" (Asaro 43). Another example encourages gambling habits in young adults by modifying online game matchmaking, a practice called "dynamic matchmaking," so that it pairs them with heavy spenders. A third example, and the focus of this paper, is the use of crime analytics and data-processing algorithms by police departments in the United States to forecast crime, a practice called "predictive policing." It has become a polarizing topic in recent years: the data collection and algorithm training involved are highly delicate procedures with far-reaching consequences, and some argue the practice could undermine Fourth Amendment protections because it calls into question the notions of "reasonable suspicion" and "probable cause" (Ferguson 261-262). Opponents of predictive policing contend that it reinforces racial biases and breeds distrust in communities, which further proliferates negative attitudes, ultimately producing an unstable solution. Proponents, on the other hand, argue that the insights derived from these models can genuinely improve decision making, resulting in more arrests while remaining racially indifferent.

One study by Brantingham et al. uses a randomized controlled trial to compare arrest rates under crime forecasts made by a human crime analyst with those made by a computer algorithm, shedding light on whether racial disparities exist. Every day, police officers across three LAPD divisions were randomly assigned a map of target areas before going on patrol, produced either by the algorithm (treatment group) or by a human expert (control group). The data collected, which were then analyzed through hypothesis testing, were arrest counts and arrests per crime, broken down by race, division, and treatment versus control. It is worth noting that officers were not told whether the maps distributed to them were made by a human or a machine, that this was a repeated-measures design (the same officers participated throughout the study), and that the divisions did not all participate at the same time, only with some overlap. The tests yielded a failure to reject the hypothesis that treatment arrests are independent of ethnicity (p-value 0.94337, Cochran-Mantel-Haenszel test), meaning there is not enough statistical evidence to conclude that the algorithm was racially biased (4). Additionally, there were statistically significant differences between the number of treatment and control arrests in the targeted areas, more than double for each division in fact, with the algorithm outperforming the expert (maximum p-value across divisions 0.015, CMH test) (4-5). Lastly, the authors investigated whether the differences in arrest rates between treatment and control are accounted for by overall higher crime rates in the target areas; once the number of crimes was included, no statistically significant differences remained, apart from one division where arrests per crime were actually lower for the treatment group (4).
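For readers who want to see what such a stratified test looks like in practice, the sketch below runs a Cochran-Mantel-Haenszel test in Python using statsmodels. The arrest counts, the 2x2 reduction (arrests of one ethnic group versus all others), and the three strata are all hypothetical assumptions for illustration; the study itself used LAPD data and a generalized CMH test over several ethnic categories.

```python
# A minimal sketch of a stratified (CMH) independence test, assuming
# made-up arrest counts. Rows = treatment vs. control maps; columns =
# arrests of group A vs. arrests of all other groups; one table per
# LAPD division (the strata).
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

tables = [
    np.array([[34, 56], [15, 25]]),   # division 1 (hypothetical counts)
    np.array([[22, 41], [11, 20]]),   # division 2
    np.array([[18, 30], [ 9, 16]]),   # division 3
]

result = StratifiedTable(tables).test_null_odds()
print(f"CMH statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
# A large p-value means we fail to reject independence of arrest
# composition and treatment, i.e., no detectable racial disparity
# between algorithmic and analyst maps in this toy setup.
```

The point of stratifying by division, here and in the study, is to test for a treatment-ethnicity association while controlling for the fact that each division has its own baseline arrest composition.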

The results of the paper advocate for the use of predictive policing, and the methodologies used to demonstrate the evidence were sound overall. However, it is important to recognize that the arrests in the paper primarily comprised burglary, car theft, and burglary theft from vehicle (Brantingham et al. 2), and that the study took place in Los Angeles, where these crimes historically represent "as much as 60% of the serious crime in the city" (Brantingham et al. 2). It would also be important to understand the inner workings of the algorithm in depth and to compare it with other algorithms used in predictive policing, as these results apply to one specific implementation, and no two algorithms are alike (Pearsall 17). Both facts should be taken into consideration: some predictors used in the algorithm may not be powerful enough for other types of crimes or in other areas, and the paper speaks only for the demographics it covers. Additionally, arrests themselves "are an imperfect proxy for other types of police contacts including stops, searches and detentions short of arrest" (Brantingham et al. 5). The algorithm may well induce the disparities discussed once these other kinds of encounters are considered, and the authors admit this as a limitation of the study.

Although the paper is limited in the types of crimes, demographics, and encounters it examines (to name a few), the statistical evidence points in favor of the increased accuracy and racial indifference that predictive policing can exhibit. There is more to draw on this point. The paper also features some background on the models currently in use. One of these is a space-time Hawkes process, a type of probabilistic model governed by a "self-exciting" intensity function, which describes how the rate of events depends on previous occurrences in time. If a crime has taken place recently, the intensity function "jumps" accordingly, so the chance of a further crime is heightened; the jump then decays over time until another event occurs. If none occurs, the excess intensity diminishes exponentially toward zero, returning to the baseline rate. This type of model was used in the Los Angeles predictive policing experiment (Brantingham et al. 2), and it describes some types of crimes surprisingly well, especially those considered in the Brantingham et al. study. For example, the theory of crime known as "near repeat theory" "posits that once a particular location has been subject to a crime, it is statistically more likely that that location and the close environs will be subject to additional, similar crime events during a brief time frame after the initial crime" (Ferguson 277). This theory holds up and has been studied numerous times: the phenomenon in which recently affected areas "advertise their vulnerability" is real, and appears to be contagious. Proponents argue that the right technology, armed with this kind of knowledge, could prevent and deter about 15 percent of burglaries in a given area (278). Moreover, there is prevailing evidence that merely the "presence of police in a given place removes opportunities for crime even without any direct contact with potential offenders" (Brantingham et al. 2). Thus, pinpointing recently affected areas based on the type and nature of the crime committed, and then attending to those areas, has its merits, and this is something predictive policing delivers.
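To make the "self-exciting" behavior concrete, here is a minimal sketch of a Hawkes intensity function with an exponential decay kernel. The parameter values and event times are illustrative assumptions, not those of the model used in the experiment.

```python
# A minimal Hawkes intensity sketch: lambda(t) = mu + sum over past events
# t_i < t of alpha * exp(-beta * (t - t_i)). Parameters are illustrative.
import math

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    """Event rate at time t: baseline mu plus a jump of alpha per past
    event, each decaying exponentially at rate beta."""
    excitation = sum(alpha * math.exp(-beta * (t - ti))
                     for ti in event_times if ti < t)
    return mu + excitation

# Three burglaries close together in time...
events = [1.0, 1.5, 2.0]
for t in [0.5, 2.1, 4.0, 10.0]:
    print(f"t = {t:5.1f}  intensity = {hawkes_intensity(t, events):.4f}")
# ...produce a spike in intensity just after t = 2 (near-repeat risk)
# that decays back toward the baseline mu as time passes without new events.
```

The exponential kernel is what encodes near repeat theory: each event raises the short-term risk in its vicinity, and the elevated risk fades on a time scale set by beta.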

On the other hand, without explicitly factoring race into these models, there is still a possibility of racial bias, especially since other variables "act as proxies—such as socioeconomic background, education, and zip code" (Barczyk). For example, with the Hawkes process model, if the event times fed into the model come from racially biased arrests, the intensity function will increase in a given area, leading to a greater likelihood of police resources being deployed there in the future, which in turn may lead to more arrests in the area, causing a feedback loop (Brantingham et al. 2). Therefore, it matters what kind of data is collected and included in these models (Pearsall 19) and how the outputs are interpreted; misuse of these models may result in skewed decisions that perpetuate systemic racism (Barczyk). Another issue is that the inner workings of these predictive policing algorithms are not open to scrutiny, as police divisions are generally reluctant to release that sort of information (Barczyk). This is all the more alarming given that some models we do know of were trained on data collected 20 to 30 years ago, or in regions outside the United States, such as Canada and Europe, where the proportion of African Americans, for example, is far smaller (Barczyk). From a statistical standpoint, those algorithms are trained to make decisions on nonrepresentative data that is also out of date. We therefore run into an anomaly where such models may pick up coincidental correlations, assigning a significantly increased probability of committing a crime to black people, and to areas that are moving away from crime, simply because a black person is a rarity in the data.
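The following toy simulation, built on deliberately crude assumptions (two areas with identical true crime rates, patrols allocated in proportion to past recorded arrests, and arrests that can only be recorded where officers patrol), illustrates how an initial skew in the data can persist under such a feedback loop. None of the numbers comes from the cited studies.

```python
# A toy feedback-loop model: area B starts over-represented in the arrest
# data due to historically biased enforcement, even though true crime is
# identical in A and B. All quantities are hypothetical.
true_crime = [10.0, 10.0]   # identical underlying crime in both areas
arrests = [8.0, 12.0]       # B starts with more recorded arrests
total_patrols = 20.0
detection_rate = 0.1        # arrests per unit patrol, same in both areas

for step in range(5):
    total = sum(arrests)
    # "predictive" allocation: patrols follow past recorded arrests
    patrols = [total_patrols * a / total for a in arrests]
    # recorded arrests scale with patrol presence times local crime;
    # since crime is equal, only patrol presence differs
    arrests = [detection_rate * p * c for p, c in zip(patrols, true_crime)]
    print(f"step {step}: patrols A/B = {patrols[0]:.2f}/{patrols[1]:.2f}, "
          f"arrest share B = {arrests[1] / sum(arrests):.2%}")
# Area B's 60% arrest share never corrects itself, despite equal true
# crime: the model learns the bias in the data, not crime on the ground.
```

Because patrols follow recorded arrests and recorded arrests follow patrols, the initial disparity is frozen in place; a sharper allocation rule, such as sending all spare units to the top-ranked area, would amplify the skew rather than merely preserve it.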

Predictive policing is a powerful tool that must be handled properly. In some cases the algorithm, the data, and the predictability of the scenario come together to deliver impressive results. In one real-life example, predictive policing was used to reduce the random gunfire that occurred every New Year's Eve in Richmond, Virginia, where it enabled the police to "anticipate the time, location and nature of future incidents" (Pearsall 17). Once officers were placed in the predicted locations, they were able to "prevent crime and respond more rapidly," yielding a 47% decrease in random gunfire and over a 200% increase in the number of weapons seized, altogether saving around $15,000 in personnel costs (Pearsall 17). So, more can be done with much less.

However, predictive policing can be abused as well, as is the case when it completely automates human decision making, and when people fail to weigh the harm that the resulting enforcement steps can inflict on the communities being policed. This issue is magnified by the nationwide call for police budget cuts, which creates an overreliance on these tools (Barczyk; Pearsall 17). It is difficult to deem every implementation of predictive policing bad, but we should not readily dismiss concerns about racism either. A necessary step toward removing the barrier on both sides would be greater transparency on the part of the police.

 

 

Works Cited


Asaro, Peter M. "AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care." IEEE Technology and Society Magazine 38.2 (2019): 40-53. Print.

Barczyk, Franziska. "Predictive Policing Algorithms Are Racist. They Need to Be Dismantled." MIT Technology Review. MIT, 2020. Web. 22 Oct. 2020.

Brantingham, Jeffrey, Matthew Valasik, and George O. Mohler. "Does Predictive Policing Lead to Biased Arrests? Results from a Randomized Controlled Trial." Statistics and Public Policy 5.1 (2018): 1-6. Print.

Ferguson, Andrew G. "Predictive Policing and Reasonable Suspicion." Emory Law Journal 62.2 (2012): 259-325. Print.

Pearsall, Beth. "Predictive Policing: The Future of Law Enforcement?" NIJ Journal 266 (2010): 16-19. Print.