Predictive Policing: Institutionalized Racism in Law-Enforcement Algorithms

Robots may not kill us all yet, but the prejudicial data we feed to algorithms produces policing systems that cement systematic biases (Pixabay).

Brooke Tanner (SFS ’23) is the editor of the Western Europe and Canada section and a contributing writer to the Caravel’s Crow’s Nest. The content and opinions of this piece are the writer’s and the writer’s alone. They do not reflect the opinions of the Caravel or its staff.

The U.S. relies on mass data sets to identify potential criminals, victims, and crime scenes with a technique called “predictive policing.” Dozens of cities use one location-based algorithm, PredPol, to estimate where a crime is likely to occur. Police increasingly use predictive algorithms to decide where to deploy officers, but the value of those predictions depends on the quality of the underlying data and the rigor of the analysis. Several U.S. police departments rely on purely heuristic methods to interpret the output, such as “eyeballing” crime maps to identify hot spots, which can project officers’ existing biases onto the results. The “earthquake model” assumes that if a crime occurs at a certain location, more crime is likely to happen nearby, and, similarly, that an individual who commits one crime is more likely to reoffend. Both assumptions treat recorded crime as a neutral measure of actual crime, which it is not, and that flaw is only the first of many issues surrounding the rise of predictive policing.
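To make the earthquake analogy concrete, here is a minimal sketch of how an earthquake-style (self-exciting) model scores a location: every past incident raises the predicted risk of nearby places, and that boost fades with distance and time. This is an illustration only, not PredPol’s proprietary code; every name and parameter below is invented.

```python
import math
from datetime import datetime

# Illustrative "earthquake-style" risk score: past incidents raise the risk of
# nearby grid cells, with the boost decaying over time and distance.
# Not PredPol's actual code; the decay constants below are invented.

DECAY_DAYS = 14.0      # how quickly an incident's influence fades over time
DECAY_METERS = 250.0   # how quickly the influence fades over distance

def risk_score(cell_xy, now, past_incidents):
    """Sum the decayed influence of every recorded incident on one grid cell."""
    x, y = cell_xy
    score = 0.0
    for ix, iy, when in past_incidents:
        days_ago = (now - when).total_seconds() / 86400.0
        distance = math.hypot(x - ix, y - iy)
        score += math.exp(-days_ago / DECAY_DAYS) * math.exp(-distance / DECAY_METERS)
    return score

# Two recent burglaries near the origin make cell (0, 0) a "hot spot."
incidents = [(50.0, 30.0, datetime(2021, 5, 1)), (80.0, -20.0, datetime(2021, 5, 3))]
print(risk_score((0.0, 0.0), datetime(2021, 5, 5), incidents))
```

The catch, as the rest of this piece argues, is that “past incidents” really means past police records, so the score inherits whatever biases shaped those records.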

While the Fourth Amendment protects individuals from unreasonable searches and seizures by the government, no law stops third parties from collecting data and selling it to the U.S. government. Predictive policing does raise legal concerns elsewhere, however: a group of senators sent a letter to Attorney General Merrick Garland asking whether the Department of Justice has examined whether predictive policing complies with the Civil Rights Act of 1964.

Civil rights advocates argue that predictive policing violates privacy rights by collecting data invasively and by letting data collectors draw prejudiced conclusions from incomplete inputs.

Most concerning to advocates is facial recognition technology (FRT), which worsens racial profiling and disproportionately harms Black people. Studies show that leading FRT systems, when used to identify criminal suspects, produce higher false-positive rates for Black Americans and for women. That is discrimination, plain and simple, and it disproportionately sacrifices these groups’ civil liberties and privacy. Cisgender white men built FRT systems that privilege faces like their own; predictably, these systems fail on other gender expressions while magnifying existing civil injustices against minority populations.
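A false positive here means an innocent person wrongly flagged as a match. The short sketch below, built on entirely invented placeholder records rather than any study’s data, shows how an audit would compare that error rate across groups.

```python
# How an FRT audit compares false-positive rates across demographic groups.
# Every record below is an invented placeholder, not data from any real study.

records = [
    # (group, system_flagged_match, actually_a_match)
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, True),
]

def false_positive_rate(rows):
    """Share of true non-matches that the system wrongly flagged as matches."""
    non_matches = [r for r in rows if not r[2]]
    if not non_matches:
        return 0.0
    return sum(1 for r in non_matches if r[1]) / len(non_matches)

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
```

When that rate runs consistently higher for one group, its members bear more of the wrongful stops and investigations that follow a false match.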

By aggregating bulk data on individuals without reasonable suspicion of wrongdoing, FRT encroaches on individuals’ First Amendment freedoms of expression, association, and assembly. It also threatens Fourth and Fourteenth Amendment privacy rights by collecting data without consent. The Supreme Court has already ruled that collecting mass cell-site data without a warrant is unconstitutional. Yet sixteen states allow the FBI to run FRT searches against ID photos to algorithmically find suspects. These FRT algorithms perpetuate harm against communities of color, impede our right to privacy, and rest on shaky constitutional ground.

Many policing precincts adopt AI technology to cut costs. (Flickr)

Predictive policing does not work. In Broward County, Florida, only 20 percent of the people whom predictive algorithms flagged as likely to commit new violent crimes actually did so in the two years after their initial arrests. If police departments already arrest Black people at higher rates, then algorithms trained on that data will perpetuate the same systemic racism. High-profile police departments in Los Angeles, New York, and Chicago have all adopted predictive policing algorithms. Los Angeles and Chicago ended their programs over inconsistent data collection that disproportionately targeted communities of color. New York instead doubled down on data collection and algorithm-based policing, further entrenching this “self-reinforcing loop [of algorithmic bias] over and over again,” according to Katy Weathington, a professor at the University of Colorado Boulder.
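That loop is easy to see in a deliberately simplified, hedged simulation: imagine two neighborhoods with identical underlying crime, where the daily patrol goes wherever recorded incidents are highest and only patrolled crime gets recorded. Every name and number below is invented for illustration; this is not any department’s model.

```python
# Toy model of the self-reinforcing loop: two neighborhoods with the SAME true
# crime rate. Patrols follow recorded incidents, and only patrolled crime gets
# recorded. All numbers are invented; this is not any department's system.

TRUE_DAILY_INCIDENTS = 3                  # identical in both neighborhoods
recorded = {"north": 5, "south": 4}       # a one-incident gap in historical records

for day in range(30):
    target = max(recorded, key=recorded.get)   # the patrol follows the data
    recorded[target] += TRUE_DAILY_INCIDENTS   # only patrolled crime is recorded

print(recorded)  # {'north': 95, 'south': 4}: the tiny initial gap decides everything
```

Both neighborhoods generate the same amount of crime, yet the data ends up “proving” that one of them is the problem.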

One proposed remedy is a kind of affirmative action, in which future predictive policing models would explicitly take race into account and apply a higher risk threshold to minority suspects than to white suspects. However, these tools are currently forbidden by law from factoring in race, so racial bias instead seeps in through strongly correlated proxy variables, down to the exact zip code.
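In practice, that remedy would amount to group-specific cutoffs: the model’s raw risk score stays the same, but the bar for flagging someone is set higher for the group the training data is known to over-represent. The sketch below uses invented group labels, scores, and thresholds purely to show the mechanism.

```python
# Sketch of the proposed remedy: identical risk scores, but a higher cutoff for
# the group the training data over-represents. Labels and numbers are invented.

THRESHOLDS = {"over_policed_group": 0.8, "baseline_group": 0.6}

def flag_for_review(risk_score: float, group: str) -> bool:
    """Flag a person only if their score clears the group-specific threshold."""
    return risk_score >= THRESHOLDS[group]

print(flag_for_review(0.7, "baseline_group"))      # True
print(flag_for_review(0.7, "over_policed_group"))  # False: same score, higher bar
```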

Other activists argue simply for halting predictive policing programs until algorithmic biases can be eliminated. The EU formally released its proposed artificial intelligence (AI) regulations after an early draft leaked on April 14. The proposal lets states continue to use biometric surveillance on citizens, and it places responsibility for predictive policing and surveillance systems on software developers rather than on the governments using these tools. EU countries are already pushing back against the regulations; many have adopted and expanded AI technology in policing. The guidelines may prove less fruitful than the EU hopes, since it falls to these hesitant member states to establish enforcement rules and to interpret broad and sometimes vague provisions.

A terrifying example of predictive policing’s impact is its role in the Chinese government’s genocide of Uyghur Muslims in Xinjiang, where FRT-based surveillance and other advanced AI systems streamline systematic ethnic cleansing. China has notably developed predictive software that lets police collect data on Uyghurs’ social behavior and flag people for detention. China is not solely to blame for this technology: it has acquired predictive policing tools from the U.S. and Germany and has expressed interest in AI research from the U.K., Norway, France, and India.

In India, more than 400 cities use predictive policing systems from the AI company Innovatiview, which draws on facial recognition feeds from hundreds of city cameras to track behavior and identify individuals. Indian anti-discrimination laws often exempt policing algorithms, raising transparency and privacy concerns about both the algorithms’ biases and how the data is stored.

Algorithms used in predictive policing, such as facial recognition technology, uphold racist systems of oppression and violence. Algorithms may one day handle the logistics of policing better than humans can, but we cannot build objective algorithms until we eradicate the existing bias and racism in the police forces that supply their data. Internationally, countries and institutions are grappling with how best to balance security and privacy; right now, many are opting for the former.

Have a different opinion? Write a letter to the editor and submit it via thecaravelgu.opinion@gmail.com for publication on our website!
