Algorithmic risk assessment is hailed as offering criminal justice officials a science-led system to triage offender populations and better manage low- versus high-risk individuals. Risk algorithms have reached the pretrial world as a best-practices method to aid reforms that reduce reliance upon money bail and to moderate pretrial detention's material contribution to mass incarceration. Still, these promises remain elusive if algorithmic tools cannot achieve sufficiently accurate rates in predicting criminal justice failure. This article presents an empirical study of the most popular pretrial risk tool used in the United States. Developers promote the Public Safety Assessment (PSA) as a national tool, yet little is known about the PSA's developmental methodologies or performance statistics. This dearth of information is alarming because the tool is being used in high-stakes decisions about whether to detain individuals who have not yet been convicted of any crime. This study uncovers evidence of performance accuracy using a variety of validity metrics and, as a novel contribution, investigates the use of the tool in three diverse jurisdictions to evaluate how well it generalizes in real-world settings. Policy implications of the findings may be enlightening to officials, practitioners, and other stakeholders interested in pretrial justice as well as in the use of algorithmic risk assessment across criminal justice decision points.