
AI IN THE JUSTICE SYSTEM

  • diegorojas41
  • May 24
  • 3 min read

What if you walked into a courtroom and discovered that, on that day, your freedom would be decided by a computer? Not by a person who knows your story, but by a machine trained on old data and math, an algorithm you can't see or question. Well, this is not science fiction. It's happening right now.


Artificial Intelligence (AI) is quietly entering courtrooms around the world. It's used to assess whether someone is likely to commit another crime, to help decide who gets bail or parole, and even to predict where future crimes might happen. The idea sounds promising: fast, neutral decisions powered by data. But the reality is far more troubling. Why?


Well, because…


1. Biased Data Creates Biased Decisions


AI learns from the past. But what if the past was unfair?


In the U.S., a tool called COMPAS is used right now to predict how likely someone is to reoffend. Investigations found it was more likely to wrongly label Black defendants as dangerous and white defendants as safe, even when the opposite was true.


So instead of fixing injustice, AI learned it. Then it spread it.


2. The “Black Box” Problem


Most AI systems don’t explain how they make decisions. Even the judges and lawyers using them often can’t understand the logic behind the results.


This means a defendant might get a longer sentence or lose parole, and neither they nor their lawyer can truly challenge the decision. What does that mean for the idea of a fair trial?


3. Predictive Policing: Watching the Wrong People


Some police departments use AI to predict where crimes will happen or who might commit them. But these systems often target poor neighborhoods and minority communities, because those communities have been over-policed in the past.


In cities like Chicago, people were flagged as “high risk” just because of who they knew, not because of anything they did.


4. Mistaken Identity with Facial Recognition


In Michigan, a man named Robert Williams was arrested in front of his kids because facial recognition software thought he looked like someone else. He was Black. The technology made a mistake. But it still cost him time, stress, and dignity.


Facial recognition is less accurate for people with darker skin. No shit, for real? For real. Yet it's still being used by law enforcement across the world.


5. AI Welfare Watchdogs Gone Wrong


In the Netherlands, an AI system called SyRI was used to detect welfare fraud. It ended up unfairly targeting immigrants and low-income neighborhoods. The backlash was so strong that the system was shut down in 2020, after a Dutch court declared it a violation of human rights.


Why This Matters Now


AI is not evil. It's a tool. But when it's used in the justice system without transparency, fairness, or accountability, it can become a weapon of injustice. That's why we have to ask:


Who built this system?

What data did it learn from?

Can people question or appeal its decisions?

Who is responsible when it gets it wrong?


Until we have solid answers and real protections, AI should never replace human judgment where freedom or fairness is at stake. Don't you agree?


Final Thought


Justice requires empathy, context, and conscience. AI has none of these.


Using AI in the justice system before it’s truly fair is like building a house on a cracked foundation. It might look strong, but sooner or later, people will get hurt. Let’s not be in a hurry to automate what should remain deeply human.


Thanks for reading. Abrazos.


Diego Rojas