Across the country, the criminal justice system relies on artificial intelligence (AI) to help investigate, charge, sentence, and even release offenders. Increasingly, these AI algorithms are enormously complex, and their decision pathways are kept secret. That has led to the rise of opaque technologies, otherwise known as black boxes.
What Are We Talking About?
In the criminal justice system, black box technologies use proprietary algorithms to perform facial recognition, interpret DNA mixtures involving multiple people, and even assess a defendant's risk of recidivism. The science is complicated and poorly understood by most people, yet it strongly influences how evidence is weighed both inside and outside a court of law. Investigators, judges, juries, and policymakers all rely heavily on conclusions drawn from black box technologies. That reliance is problematic because it rests on secretive, complex science that is easily misunderstood and, at times, simply wrong.
Regulation Issues
These black-box AI technologies arrived so quickly that the legal system has been unable to enact regulations protecting the rights of individuals pitted against the algorithms. The technologies are kept secret, sometimes by design and sometimes through corporate maneuvering to guard trade secrets. When civil rights violations are alleged, it has so far been nearly impossible to build a case, precisely because the technology is so opaque. That same opacity has made regulation difficult to date.
Protecting the Secrets
Many judges trust these technologies and, in fact, shield them from closer examination. One case involved a challenge to DNA analysis software and a request that independent evaluators review it. The judge, however, refused both the defense challenge and the proposed inspection on the grounds that the company could not market its technology if it were made more transparent.
Can It Be Relied On?
According to the corporations that market and run them, these technologies can absolutely be trusted to reach fair and accurate conclusions. But those companies profit from delivering favorable results, and no one else is allowed to examine the technologies closely. How trustworthy are they, really? It is a question that currently has no answer.
AI explanations do not always faithfully reflect a model's actual calculations, either. In fact, different explainability methods often disagree with one another; when they do, at least some of the explanations must be wrong, and there is no reliable way to tell which. Nonetheless, advocates of the technologies argue that some mistakes are worth stomaching when weighed against the accuracy provided in other cases. Really? How do we know when a mistake has occurred and when a conclusion was accurate? Is it okay to sacrifice the constitutional rights of some because there is a chance others will be protected? Is that really an acceptable argument? Are we, as a society, okay with decisions impacting life and liberty being locked up in a black box that no one outside of a handful of corporations understands?
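The disagreement problem is easy to demonstrate. Below is a minimal, hypothetical sketch: the toy black_box function, the instance x, and both attribution routines are illustrative assumptions, not any vendor's actual software. It applies two common post-hoc explanation approaches, occlusion (zeroing out one feature at a time) and a LIME-style local linear surrogate, to the same prediction, and neither ever sees the model's internals.

```python
# Minimal sketch (hypothetical model and data) of how two post-hoc
# explanation methods can disagree about the same black-box prediction.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for an opaque risk model: a nonlinear score in [0, 1]."""
    x0, x1, x2 = X[:, 0], X[:, 1], X[:, 2]
    return 1 / (1 + np.exp(-(x0 * x1 + 0.5 * x2)))  # hidden interaction term

x = np.array([[1.0, -1.0, 0.5]])   # the single instance being "explained"
base = black_box(x)[0]

# Method 1: occlusion -- zero out each feature and record the score change.
occlusion = []
for j in range(x.shape[1]):
    x_perturbed = x.copy()
    x_perturbed[0, j] = 0.0
    occlusion.append(base - black_box(x_perturbed)[0])

# Method 2: LIME-style surrogate -- fit a local linear model on noisy
# neighbors of x and read its coefficients as attributions.
neighbors = x + rng.normal(scale=0.5, size=(500, 3))
targets = black_box(neighbors)
coef, *_ = np.linalg.lstsq(
    np.column_stack([neighbors, np.ones(len(neighbors))]), targets, rcond=None
)
surrogate = coef[:3]

print("occlusion attributions:", np.round(occlusion, 3))
print("surrogate attributions:", np.round(surrogate, 3))
# The two methods can disagree on a feature's direction and ranking; at
# most one of them can faithfully reflect what the model actually computed.
```

In this toy setup the two methods can disagree even about whether a given feature pushed the score up or down, and because neither inspects the model's internals, anyone presented with either "explanation" has no way to know which, if either, is faithful.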
Case in Point
In one case, a medical examiner's testimony based on genotyping software was challenged, and multiple concerns came to light as the court took a closer look at the accuracy of its conclusions. Later, another judge ruled that relying on this kind of evidence is a mistake and even advocated reviewing convictions based on AI black box technology, because independent experts have not been able to examine the technology and corroborate its conclusions. That judge estimated that when an evaluated sample contains DNA from four or more people, black box technologies are likely wrong more than half the time.