Algorithmic Unfairness and Racial Injustice in America’s Criminal Legal System


In America, we are currently living through a moment of heightened awareness of racialized police brutality and, in response, the reawakening of a racial justice movement. In this same period, we've seen an incredible expansion in the use of data analytics and AI technologies in almost all industries, including the criminal legal system. No one asked us, but many of these technologies are already in use and have been for the last two decades. From predictive policing to bail and recidivism prediction to pretrial and parole/probation electronic monitoring, these technologies claim to reduce costs and take bias out of human decision making. But how valid are these claims? A growing body of research and real-world examples shows that, far from reducing unfair outcomes, these technologies serve primarily to replicate existing biases. When we have a criminal legal system as racially biased as the one described in The New Jim Crow or Just Mercy, how can we possibly expect anything different?

Let’s take a look at predictive policing. It is reasonable to expect law enforcement to not only respond to crime but also try to prevent it, which raises the question: where should police departments focus their energies and resources? AI tools are now available that feed historical arrest data into an algorithm to predict where crime is likely to happen, or even who is likely to commit it. However, if we apply some sociological reasoning to this problem, it becomes evident that the data the algorithm is trained on are deeply racially biased: historically, police departments have overpoliced poor communities of color and are more likely to arrest African Americans than Whites for the same offense. So where do you think the algorithm will predict more “crime”? And once more police are sent to that area, they make more arrests there, that data feeds back into the algorithm, it predicts even more crime there, and the cycle repeats.
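The feedback loop described above can be made concrete with a toy simulation. Everything here is hypothetical: two neighborhoods with identical true crime rates, a "hot spot" algorithm that concentrates patrols wherever recorded arrests are highest, and recorded arrests that scale with patrol presence. The only difference between the neighborhoods is a historical one: neighborhood A starts with more arrests on the books because it was patrolled more heavily in the past.

```python
# Toy simulation of a predictive-policing feedback loop.
# All numbers are hypothetical and chosen only for illustration.

TRUE_CRIME_RATE = 0.05   # identical in BOTH neighborhoods
TOTAL_PATROLS = 100      # patrols allocated each year
CONCENTRATION = 2        # >1: the algorithm over-concentrates on "hot spots"

# Historical bias: A starts with more recorded arrests only because
# it was overpoliced in the past, not because it has more crime.
arrests = {"A": 120.0, "B": 60.0}

for year in range(10):
    # The algorithm allocates patrols based on past recorded arrests,
    # concentrating them on the apparent "hot spot".
    weights = {hood: count ** CONCENTRATION for hood, count in arrests.items()}
    total_weight = sum(weights.values())
    for hood in arrests:
        patrols = TOTAL_PATROLS * weights[hood] / total_weight
        # More patrols produce more recorded arrests, even though the
        # underlying crime rate is the same everywhere.
        arrests[hood] += patrols * TRUE_CRIME_RATE * 20

share_a = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Share of recorded arrests in neighborhood A: {share_a:.0%}")
```

Even though both neighborhoods have the same true crime rate, the share of recorded arrests attributed to neighborhood A grows well beyond the initial disparity, because the algorithm keeps sending more patrols wherever its own past predictions generated more arrests.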

So there is already an overpolicing problem, but once police are on the ground, they are supposed to protect and serve. Yet as we’ve seen in recent weeks (and years, and really since the start of law enforcement in America), police brutality against African Americans is rampant and disproportionate. How do we ensure that Fourth Amendment protections against arbitrary search and seizure are respected? How do we ensure that excessive use of force is prevented? One technology that advocates champion is body cameras, now that video processing and storage are cheap. But what if we layer AI on top of those body cams and turn them from a police accountability tool into a powerful facial recognition system used to surveil those who are already being overpoliced, or perhaps even those protesting police brutality? Concerns quickly escalate to issues of privacy, due process, and First Amendment rights, and legislative proposals are finally cropping up to address exactly these issues.

Once someone is arrested, how do judges set their bail or sentence? Over the last two decades, many jurisdictions have started (and some have stopped) using algorithmic risk assessments meant to predict the likelihood that, if released, a person will commit a crime or fail to appear for trial. These predictive algorithms have been shown to disproportionately assign higher risk scores to Black defendants than to White defendants with similar criminal records and other characteristics. These risk assessments are also often used in the courtroom, where they play a role in determining jail time, whether to offer probation and for how long, and what other social services, if any, to provide. In a rushed effort to mitigate human bias and reduce prison overcrowding and costs, authorities have introduced a “black box” into every step of the criminal justice process. Few risk assessments have gone through validity testing, let alone robust fairness testing. Those impacted by algorithmic decision-making rarely know it, let alone have the means to request transparency into the process and demand correction if the algorithm is found to be wrong or unfair. Advocates and researchers have not been able to get access to proprietary systems so that they can be impartially tested. Despite this lack of transparency, early research shows that even well-intentioned use of new AI technologies may be doing more harm than good.
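One basic form the fairness testing mentioned above can take is comparing error rates across racial groups: among people who did not reoffend, how many did the tool still label high risk? The sketch below uses a tiny, entirely hypothetical set of records; a real audit would use actual scores and outcomes from a deployed system.

```python
# A minimal sketch of a group-wise fairness check on risk assessment output.
# The records are hypothetical, constructed only to illustrate the metric.

records = [
    # (group, labeled_high_risk, actually_reoffended)
    ("Black", True,  False), ("Black", True,  True),
    ("Black", True,  False), ("Black", False, False),
    ("White", False, False), ("White", True,  True),
    ("White", False, False), ("White", False, True),
]

def false_positive_rate(group):
    """Among people in `group` who did NOT reoffend, the fraction
    the tool still labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("Black", "White"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")
```

In this toy data, two-thirds of Black non-reoffenders are flagged as high risk versus none of the White non-reoffenders, the same pattern of unequal error rates that auditors look for. A tool can appear "accurate" overall while making its mistakes disproportionately against one group, which is why overall accuracy alone is not a fairness test.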

With prison overcrowding and skyrocketing costs, felt acutely during the Great Recession, some authorities are looking for ways to reduce the American prison population, which is the largest in the world. To that end, there has been a recent move toward “e-carceration”: using surveillance technology to control “criminals” outside of prisons, right in their communities. While this is a potentially appealing alternative to needlessly locking people up, have we really thought through what it means to monitor human beings like cattle? Instead of reducing the number of people who are involved in the criminal legal system by undoing overpolicing, minimizing sentences and other punitive measures, and exploring alternatives to imprisonment altogether, we are starting to create virtual prisons in our communities, predominantly in poor communities of color. By shackling people with expensive monitoring devices during pretrial and parole/probation and restricting and surveilling their every move, we make it harder for them to find and retain work, take care of their families, and function in society with human dignity. Moreover, what human rights to privacy are people made to give up? Will their data be unilaterally fed back into other algorithmic systems that make flawed recidivism predictions?

AI and advanced data analytics do have the potential to improve our criminal legal system, but they are far from a silver bullet. Technology alone cannot solve the racial and social injustices that plague the legal system, because ultimately people have to decide which social problems deserve attention and resources. If we don’t prioritize fixing racial disparities (or at least start by recognizing them as a problem), we won’t apply technology to that problem. Worse, we will be blind to the racial biases that technology is reproducing because we are focused on other concerns like cost reduction, efficiency, and profit. For too long, White America has turned a blind eye to the devastating impact of a racialized criminal legal system on Black communities. If we don’t grapple with this fundamental problem, no new technology will prevent us from continuing to make the same mistakes.

**Whether to capitalize Black and/or White is an issue that is again under debate. I capitalize both, following persuasive recent arguments and examples of doing so, on the grounds that all racial and ethnic identities are socially constructed. Here, capitalizing White is not meant to convey respect but rather to prevent the naturalization and normalization of whiteness, which confers on it undue power and impedes the examination of its role in maintaining and reproducing systems of racial inequity.
