Courts have a duty to both individuals and the public at large. It is imperative that courts balance the need to convict offenders and impose appropriate punishment against the rights of the individual and the absolute obligation to uphold justice.
Courts have, throughout history, been found to have convicted the wrong person, doled out punishments unfairly, and discriminated against accused individuals for a plethora of reasons. Artificial intelligence must be trained on unbiased information, lest the network develop its own internal bias.
According to the Pittsburgh Post-Gazette, a Cleveland-based court hearing involved the use of artificial intelligence in determining the risk of the accused, Hercules Shepherd Jr. Judge Jimmy Jackson Jr. was given an AI-generated risk assessment of Shepherd, based on his records. Shepherd was assigned a risk rating of:
- Two out of six for likelihood of committing another crime
- One out of six for likelihood of skipping court
Following the network's response, the court was asked to take the ratings into account when setting bail for Shepherd. Judge Jimmy Jackson Jr. did just that, setting a low bail and allowing Shepherd to go free until his court date.
What factors were considered in the risk assessment?
The risk assessment was primarily based on the information available in Shepherd's public and criminal records. Other factors included the reason for arrest, criminal history, and the outcomes of others in situations similar to Shepherd's.
Given the variability of human behavior, however, some outlets have speculated that paying a bystander a dollar to guess the outcome may be equally accurate, if not more so.
What was Shepherd arrested for?
Ohio.com reports that Shepherd was arrested for possession of a small amount of cocaine. Drug charges carry widely variable levels of accountability and punishment, resulting in anything from community service to life in prison. In this circumstance, though, it remains to be seen how the post-arraignment hearing will go.
What are some potential issues with relying on current AI for sentencing?
In this hearing, the algorithm was used to determine which risk factors to assess, to measure the prevalence of those factors once identified, and to score each value as it pertains to the accused. Ultimately, the risk assessment abilities of such algorithms could lead the world's courts into a new era of understanding.
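The three steps described above can be sketched in code. The following is a hypothetical illustration only, not the actual algorithm used in the hearing: the factor names, weights, and the mapping onto the article's one-to-six scale are all invented for the sake of the example.

```python
# Hypothetical sketch of a factor-based risk rating. All factor names,
# weights, and values here are invented for illustration; the real tool's
# inputs and scoring method are not public in this article.

from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    weight: float  # assumed prevalence-derived weight of this factor
    value: float   # defendant's standing on this factor, from 0.0 to 1.0

def risk_rating(factors, scale_max=6):
    """Collapse weighted factor values into a rating from 1 to scale_max."""
    total_weight = sum(f.weight for f in factors)
    raw = sum(f.weight * f.value for f in factors) / total_weight
    # Map the 0-1 weighted average onto the 1-6 scale cited in the article.
    return max(1, round(raw * scale_max))

# Example: a defendant with a light record scores low on the scale.
factors = [
    Factor("prior_arrests", weight=2.0, value=0.2),
    Factor("failed_appearances", weight=1.5, value=0.0),
    Factor("age_at_first_arrest", weight=1.0, value=0.5),
]
print(risk_rating(factors))  # a low rating on the 1-6 scale
```

The design choice to weight factors by prevalence, as the article describes, means the training data directly shapes the score, which is exactly why biased historical data is a concern.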
There are, however, some potential pitfalls associated with such algorithms. Some networks have previously been identified as racist, sexist, and otherwise unsuited to this purpose. While the data sets used to train an algorithm in this realm would ideally be entirely unbiased, the fact remains that some verdicts have been skewed.