Monday, August 8, 2016

High-tech and the courtroom series, Part Five: Algorithms versus judges. (Are judges any less biased in the risk-assessment process? Who knows? HL)..."I am not aware if we have any research on the comparison of judges who do and don’t have access to the scores."..."So we know that these algorithms can be, and often are, biased. But we don’t know how that bias actually impacts the sentencing decisions of judges compared with those who don’t use them. And that’s a huge question that should be answered before these scores become a default part of the American courtroom." Writer: Rose Eveleth, Motherboard (Vice).


STORY: "Does crime-predicting software bias judges? Unfortunately, there's no data," by Rose Eveleth, published by Motherboard on July 18, 2016.

SUB-HEADING: "I am not aware if we have any research on the comparison of judges who do and don’t have access to the scores."

GIST: "For centuries judges have had to make guesses about the people in front of them. Will this person commit a crime again? Or is this punishment enough to deter them? Do they have the support they need at home to stay safe and healthy and away from crime? Or will they be thrust back into a situation that drives them to their old ways? Ultimately, judges have to guess. But recently, judges in states including California and Florida have been given a new piece of information to aid in that guess work: a “risk assessment score” determined by an algorithm. These algorithms take a whole suite of variables into account, and spit out a number (usually between 1 and 10) that estimates the risk that the person in question will wind up back in jail. If you’ve read this column before, you probably know where this is going. Algorithms aren’t unbiased, and a recent ProPublica investigation suggests what researchers have long been worried about: that these algorithms might contain latent racial prejudice. According to ProPublica’s evaluation of a particular scoring method called the COMPAS system, which was created by a company called Northpointe, people of color are more likely to get higher scores than white people for essentially the same crimes. Bias against folks of color isn’t a new phenomenon in the judicial system. (This might be the understatement of the year.) There’s a huge body of research that shows that judges, like all humans, are biased. Plenty of studies have shown that for the same crime, judges are more likely to sentence a black person more harshly than a white person. It’s important to question biases of all kinds, both human and algorithmic, but it’s also important to question them in relation to one another. And nobody has done that. I’ve been doing some research of my own into these recidivism algorithms, and when I read the ProPublica story, I came out with the same question I’ve had since I started looking into this: these algorithms are likely biased against people of color. But so are judges. So how do they compare? How does the bias present in humans stack up against the bias programmed into algorithms? This shouldn’t be hard to find out: ideally you would divide judges in a single county in half, and give one half access to a scoring system, and have the other half carry on as usual. If you don't want to A/B test within a county—and there are some questions about whether that’s an ethical thing to do—then simply compare two counties with similar crime rates, in which one county uses rating systems and the other doesn’t. In either case, it's essential to test whether these algorithmic recidivism scores exacerbate, reduce, or otherwise change existing bias. Most of the stories I’ve read about these sentencing algorithms don’t mention any such studies. But I assumed that they existed, they just didn’t make the cut in editing. I was wrong. As far as I can find, and according to everybody I’ve talked to in the field, nobody has done this work, or anything like it. These scores are being used by judges to help them sentence defendants and nobody knows whether the scores exacerbate existing racial bias or not. 
“I am not aware if we have any research on the comparison of judges who do and don’t have access to the scores,” Kris Hoy, the marketing director of Northpointe, told me.........All the researchers I talked to who study sentencing, risk assessment and these algorithms said they didn’t know of a single study that compared the sentencing patterns of judges who do and don’t use these scores. There are studies out there on a variety of risk-assessment tools that look at questions of accuracy and reliability. There are plenty of studies that compare the algorithms’ guesses about recidivism with who really did return to jail. But there’s nothing that compares judges with and without the scores. Which means that states are using these scores in a variety of contexts without having any idea how they might affect decisions that impact people’s lives.........David Abrams, an economist who’s studied racial bias in courtrooms, agreed. “If I were making a decision whether to adopt it, I’d want to see some studies of this type already done,” he said. So why haven’t these studies been done? There are several possible explanations. “I often wonder if it’s because part of the rationale for using automated methods is ease of use and speed, and having to do elaborate studies on their efficacy defeats the purpose,” said computer scientist Suresh Venkatasubramanian. It takes time and money to do studies like this, and Abrams argues that there’s little pushing states and law enforcement agencies to spend that time and money validating these things. “The incentives aren’t very powerful here to get things right,” he said. “It’s really rare that judges ever get called to account for individual cases or sentencing patterns. Judicial retention is 98 percent in Cook County; there’s just not a ton to fear.” And when you think about who these technologies harm, it’s generally the most marginalized. Who is going to call out a biased sentencing pattern and call for more research to compare judges? It’s likely not the black folks being more harshly sentenced by judges. So we know that these algorithms can be, and often are, biased. But we don’t know how that bias actually impacts the sentencing decisions of judges compared with those who don’t use them. And that’s a huge question that should be answered before these scores become a default part of the American courtroom."
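The study design Eveleth describes is simple enough to sketch in code. The Python snippet below is a purely illustrative simulation of her proposed A/B comparison: randomize which judges see a risk score, then measure the black-white sentencing gap in each arm. Every number in it is an assumption invented for illustration, and the toy scoring function is a stand-in, not COMPAS, whose actual inputs and weights are proprietary. (HL)

import random

random.seed(0)  # reproducible toy run

def toy_risk_score(priors, age):
    # Illustrative stand-in for a 1-10 "risk assessment score";
    # NOT the real COMPAS model, whose formula is not public.
    raw = 2 * priors - 0.1 * (age - 18)
    return max(1, min(10, round(raw)))

def simulate_sentence(defendant, judge_sees_score):
    # Toy sentencing model (in months): a baseline, a term for prior
    # convictions, an ASSUMED fixed human bias against black defendants,
    # and, in the treatment arm, a nudge driven by the risk score.
    sentence = 12.0 + 6.0 * defendant["priors"]
    if defendant["race"] == "black":
        sentence += 4.0  # hypothetical human bias, chosen for illustration
    if judge_sees_score:
        score = toy_risk_score(defendant["priors"], defendant["age"])
        sentence += 2.0 * (score - 5)  # the score shifts the judge's guess
    return sentence

def racial_gap(judge_sees_score, n=10000):
    # Mean black-white sentence difference for one arm of the experiment.
    totals = {"black": [], "white": []}
    for _ in range(n):
        d = {"race": random.choice(["black", "white"]),
             "priors": random.randint(0, 5),
             "age": random.randint(18, 60)}
        totals[d["race"]].append(simulate_sentence(d, judge_sees_score))
    return (sum(totals["black"]) / len(totals["black"])
            - sum(totals["white"]) / len(totals["white"]))

print("gap without scores: %+.1f months" % racial_gap(False))
print("gap with scores:    %+.1f months" % racial_gap(True))

In this toy model the score draws only on prior convictions and age, so it leaves the assumed human bias unchanged; whether real scores narrow, widen, or leave that gap alone is precisely the empirical question the article says nobody has answered. (HL)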

The entire article can be found at:

http://motherboard.vice.com/read/does-crime-predicting-software-bias-judges-unfortunately-theres-no-data
  
PUBLISHER'S NOTE:

I have added a search box for content in this blog, which now encompasses several thousand posts. The search box is located near the bottom of the screen, just above the list of links. I am confident that this powerful search tool provided by "Blogger" will help our readers and me get more out of the site.

The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at:

http://www.thestar.com/topic/charlessmith

Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html

Please send any comments or information on other cases and issues of interest to the readers of this blog to:

hlevy15@gmail.com;

Harold Levy;