Communiqué

Artificial Intelligence: Is It Biased in Law Enforcement and Court Use?


Artificial Intelligence is quickly becoming a greater part of our lives. Algorithms already trace our digital footprints and routinely serve us targeted advertising and social media content tailored to our views.
AI checks our credit scores and approves or denies us for loans and mortgages.
It is also being used to predict behavior, especially by law enforcement and criminal justice systems. But is it biased, and does it racially profile?
Randy Rieland is an award-winning journalist and digital media strategist in Washington, DC. He also writes about innovation for Smithsonian.com, and he recently wrote about how AI is used by some law enforcement agencies. https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/
“A program called PredPol was created eight years ago by UCLA scientists working with the Los Angeles Police Department, with the goal of seeing how scientific analysis of crime data could help spot patterns of criminal behavior,” Rieland wrote. “Now used by more than 60 police departments around the country, PredPol identifies areas in a neighborhood where serious crimes are more likely to occur during a particular period.”
The program, however, is not without controversy. Notable groups such as the American Civil Liberties Union (ACLU) and the Brennan Center for Justice question whether the data used and the secret algorithms in the software create bias, especially against minorities and minority neighborhoods.
There are also questions about whether the data and the resulting AI spur law enforcement officers to be more aggressive in their arrest practices in certain neighborhoods. Some argue these AI programs amount to a form of racial profiling.
At this time, there is little accountability for the companies that produce AI systems, because the software and the algorithms are "proprietary" and kept secret.
Judges have also used AI to determine whether a convicted defendant is likely to commit more crimes; in short, AI has been used to inform sentencing decisions. In 2016, a ProPublica investigation concluded that the system used by judges was "biased against minorities." The company behind the AI disputed that conclusion.
There is almost no transparency in the development and application of these AI systems. Until they can be examined by the public and interest groups, the debate over their fairness and biases will likely continue.