Just Algorithms: Using Science To Reduce Incarceration And Inform A Jurisprudence Of Risk

Author: Christopher Slobogin
Publisher: New York, NY: Cambridge University Press, 2021. 167p.
Reviewer: Aziz Z. Huq | September 2021

Even if the Black Lives Matter (BLM) movement has not achieved its “defund” ambition for municipal policing—and the evidence is decidedly mixed—it has plainly struck criminal justice scholarship like a bolt of lightning (Marcellin & Doyle, 2020). A scholarly work that brackets or marginalizes the distinctive scale and racialized character of American policing and the associated carceral state risks being taken less than seriously or ignored, even if it offers a contribution to a different, important conversation within the world of criminal justice.

As evidence of this re-orienting change, consider the most recent book by the eminent criminal justice scholar Christopher Slobogin. Professor Slobogin, now at Vanderbilt Law School, has been teaching and writing about the diverse problems thrown up by the administration of the criminal justice, mental health, and juvenile justice systems for more than two and a half decades. He is the author of six monographs, including deservedly respected volumes about risk, preventive detention, and privacy, as well as dozens of law review articles across many leading journals. At its core, his new book Just Algorithms is a continuation of themes addressed at length in much of that earlier work. But Slobogin frames the decision to use risk assessment instruments (RAIs) as a superior policy response to the “plague of mass incarceration” (p. 1). Consciously writing in light of BLM protests, he perceives a “consensus” among policymakers over the need to reduce the number of incarcerated persons, and proposes RAIs as a means to that end (p. 24). His contribution hence asks to be evaluated in relation to that larger social project, rather than more narrowly as an exploration of the design and implementation of RAIs.

I don’t think that Just Algorithms will persuade the many racial justice advocates and scholars who are skeptical of RAIs, and who would prefer to move directly toward decarceration (The Use of Pretrial “Risk Assessment” Instruments, n.d.). Indeed, for reasons I’ll get to below, I am not sure that Slobogin has persuaded himself when he postulates that RAIs can be an effective response to the political problem of racialized mass incarceration. More modestly, however, Slobogin’s book succeeds as a careful and illuminating discussion of the key technical questions presented by RAIs currently in use. In his best moments, Slobogin adumbrates crisply and lucidly the key distinctions between different forms of RAIs, isolates the most important accuracy-related parameters, and provides a powerful argument against allowing human overrides of machine judgment (which almost always generate higher error rates) (Huq, 2020).

Slobogin further outlines a synoptic reform project for criminal justice centered around a system of “preventive justice,” or “a modern form of indeterminate sentencing which relies heavily on parole boards informed by RAIs” (p. 122). Under this system, which echoes a proposal made by Norval Morris in the early 1970s, a state would adopt sentence ranges that are consistent with an offender’s desert, then rely on expert boards guided by RAIs to calibrate sentences more precisely (p. 131). Even if the larger normative ambition of the book as an answer to mass incarceration falls short, therefore, these more grounded and technical elements of Just Algorithms make up a valuable contribution to the scholarly literature.

The term “risk assessment instruments” covers a wide range of different technologies for making predictions of some kind with respect to violent or dangerous behavior. They are used now at various points in the pretrial, sentencing, and post-conviction supervision stages of the criminal justice process: At least four hundred are deployed on the ground worldwide (p. 37). There are many ways of slicing up this universe. Perhaps the most important distinction focuses on the balance between quantitative data and unstructured human judgment. On one end of this spectrum are clinical risk assessments, which merely identify factors without directing how they should be combined. Structured professional judgment RAIs both list factors and indicate how they can be combined, but then leave decision-makers with discretion as to how to use this conclusion. Adjusted actuarial risk assessments create a final score, but allow for observations based on clinical models. Finally, stand-alone actuarial risk assessments do not allow for any post-hoc adjustment by their human user. As Slobogin observes, citing the pathmarking work of clinical psychologist Paul Meehl, the accuracy of any RAI suffers in practice from the overlay of human judgment. If RAIs are to be useful, therefore, they should be implemented without any back-end option for decision-makers to second-guess their results.
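The stand-alone actuarial end of this spectrum can be made concrete with a toy sketch. The factors, weights, and bins below are entirely hypothetical and drawn from no instrument the book discusses; the point is only the mechanism: listed factors combined by a fixed rule into a score, with no step at which a human user can adjust the result.

```python
# Hypothetical stand-alone actuarial instrument. All factors, weights,
# and thresholds are invented for illustration only.
FACTOR_WEIGHTS = {
    "prior_arrests": 2,            # points per prior arrest (capped at 3 arrests)
    "age_under_25": 3,
    "prior_failure_to_appear": 4,
}

RISK_BINS = [(0, "low"), (5, "moderate"), (9, "high")]  # minimum score per bin

def actuarial_score(record: dict) -> int:
    """Combine the listed factors by a fixed additive rule."""
    score = min(record.get("prior_arrests", 0), 3) * FACTOR_WEIGHTS["prior_arrests"]
    if record.get("age", 99) < 25:
        score += FACTOR_WEIGHTS["age_under_25"]
    if record.get("prior_failure_to_appear", False):
        score += FACTOR_WEIGHTS["prior_failure_to_appear"]
    return score

def risk_bin(score: int) -> str:
    """Map the score to a label; there is no human-override step."""
    label = "low"
    for threshold, name in RISK_BINS:
        if score >= threshold:
            label = name
    return label
```

A hypothetical arrestee with two prior arrests, age 22, and one prior failure to appear scores 2·2 + 3 + 4 = 11, landing in the “high” bin; a structured-professional-judgment tool would stop after listing the factors, and an adjusted actuarial tool would let a clinician revise the final label.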

Another distinction concerns the technical specifications of the RAI. Much recent scholarly attention to RAIs is motivated by the use of sophisticated machine-learning instruments, rather than human calculation, to derive predictions from historical data. While the basic mathematical models used by these computational devices may not be distinctive (they are often akin to ordinary least squares regressions), they do offer advances on familiar tools because of the gains in accuracy that often can be obtained by crunching large pools of historical data. For example, in a much-remarked-upon 2017 paper, the computer scientist Jon Kleinberg and his colleagues estimated that shifting from human to machine decision-making in one jurisdiction could reduce the rate of offending by those released pretrial on bail by up to 24.8%, with no change in pretrial detention rate, or else reduce the jail population by 42.0% with no increase in crime rate (Kleinberg et al., 2017). Slobogin has little to say about the differences between machine learning tools and their less computationally sophisticated precursors. He hence does not clarify whether, or to what extent, technological advances in prediction should change how RAIs are evaluated. Nor does he explore the institutional circumstances necessary to realize the accuracy gains Kleinberg et al. propose—all important questions that remain unanswered.

All RAIs, regardless of whether they rely on human or machine judgment to make predictions, raise a number of common technical questions. Slobogin’s most powerful contribution is his exploration of these. To begin with, all RAIs use “research about groups [to] predict whether a given individual will reoffend” (p. 43). This is what Slobogin usefully calls a “G2I” problem. This G2I problem raises a distinctive set of design questions, and one of the strengths of Just Algorithms is the precision and clarity with which these questions are set forth even where the underlying legal requirements are not clear.

For instance, the designer of a G2I instrument needs to settle on an outcome criterion, fix a probability threshold (or thresholds) beyond which action is recommended, decide on the length of time for which a prediction holds, and select the interventions on offer to mitigate risks. As Slobogin notes, the law is surprisingly vague on several of these key points. The Bail Reform Act of 1984, for example, allows pretrial detention if an arrestee poses “a danger to any person or the community” without defining this in probabilistic terms (p. 46).
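The stakes of the threshold choice the law leaves vague can be shown with a small sketch. The scores and outcomes below are fabricated solely for illustration; the point is that where the “action” cutoff is set directly trades detaining people who would not have reoffended (false positives) against releasing people who do (false negatives).

```python
# How the choice of action threshold trades false positives against
# false negatives. Scores and outcomes are fabricated for illustration.

def confusion_counts(scores, outcomes, threshold):
    """Counts under a 'recommend detention if score >= threshold' rule."""
    tp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 0)
    return tp, fp, fn, tn

scores   = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
outcomes = [0,   0,   0,   1,   0,   1,   0,   1,   1]  # 1 = reoffended

for threshold in (0.3, 0.5, 0.7):
    tp, fp, fn, tn = confusion_counts(scores, outcomes, threshold)
    print(f"threshold={threshold}: detained-but-safe={fp}, released-but-reoffended={fn}")
```

On this toy data, lowering the cutoff from 0.7 to 0.3 eliminates released reoffenders but triples the number of people detained who would not have reoffended; a statute that speaks only of “a danger to any person or the community” gives no guidance on which point along that trade-off to pick.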

There is also a great deal of uncertainty in law and in practice as to what counts as a risk of “dangerousness” or “violence.” The widely used COMPAS instrument, for example, defines recidivism in terms of any arrest, whether for a felony or a misdemeanor (p. 50). Given how many misdemeanors are really nonviolent crimes of poverty, Slobogin rightly notes, it is doubtful that an outcome variable defined in such terms is a useful public policy tool. The fact that COMPAS is so widely adopted, indeed, might be an indicator that public authorities lack the technical insight or diligence to make sound choices when it comes to selecting RAIs. This would be a first strike against Slobogin’s argument that they could be used to tackle the problem of mass incarceration. With that in mind, I wish he had explored further the question whether it is indeed possible to obtain an accurate account of some normatively compelling metric of real-world violence, free of the distortions that plague data gathered by police.

Slobogin provides a careful and thorough treatment of technical concepts necessary for a more precise definition of accuracy. Conceptual tools such as calibration validity, category base rates, and discriminant validity provide fine-grained ways to evaluate the strength of a signal produced by an RAI. Readers without technical backgrounds will find his discussions of these concepts crisp and useful.
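Two of these concepts can be illustrated with made-up numbers (the data below come from no real RAI). Calibration validity asks whether people the instrument scores at, say, roughly 80% risk in fact reoffend at roughly that rate; discriminant validity, often summarized by the area under the ROC curve (AUC), asks how reliably the instrument ranks eventual reoffenders above non-reoffenders.

```python
# Illustrative calculations of calibration and discriminant validity
# (AUC) on fabricated scores and outcomes; not data from any real RAI.
from itertools import product

scores   = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
outcomes = [0,   0,   1,   0,   0,   1,   1,   1,   1,   1]  # 1 = reoffended

def calibration_by_bin(scores, outcomes, edges=(0.0, 0.5, 1.0)):
    """Observed reoffense rate among cases scored within each bin."""
    rates = {}
    for lo, hi in zip(edges, edges[1:]):
        cases = [y for s, y in zip(scores, outcomes) if lo <= s < hi]
        rates[(lo, hi)] = sum(cases) / len(cases) if cases else None
    return rates

def auc(scores, outcomes):
    """Probability a random reoffender outscores a random non-reoffender."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))
```

On this toy data the instrument discriminates well (AUC ≈ 0.92) but is only loosely calibrated: cases scored below 0.5 reoffend 25% of the time, those above, about 83%. The two properties are distinct, which is why evaluating an RAI on a single headline number can mislead.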

Yet I think there is some reason to be skeptical of Slobogin’s larger ambition to offer RAIs as a solution to the supervening problem of mass incarceration. In his terms, “risk is the question” faced by any policymakers seeking to shrink the carceral state, and so “well-validated RAIs should usually be the answer” (p. 83). This answer is embedded for Slobogin in a larger return to indeterminate sentencing, with a reinvigorated right to parole and substantially more modest opportunities for prosecutors to deploy plea bargaining as a platform for shaping criminal-justice policy outcomes (pp. 149–50). This approach, Slobogin concedes, is not new, so much as a circling back to a status quo ante before determinate sentencing organized around guideline frameworks.

I think there are a number of reasons, though, for thinking that RAIs will not play as large a role in criminal-justice reform as Slobogin hopes. To begin with, empirical work by Megan Stevenson on the introduction of RAIs in Kentucky suggests that it led to only a “trivial increase in pretrial release,” and this decayed over time as judges reverted to their old habits (Stevenson, 2018, p. 308). Slobogin urges that RAIs be mandatory, but it is not clear whether this tweak would have been enough to lead to a different outcome in Kentucky.

Moreover, Slobogin’s own account of the powerful cultural and political forces that have driven mass incarceration in the United States belies the idea that merely pointing to a technical solution for successful risk prediction will dissipate the powerful partisan forces and interest groups (such as police and prison unions) that benefit from, and hence defend, the status quo. Indeed, recent experience of widespread vaccine and mask skepticism, despite the continuing toll of COVID-19, suggests that Slobogin is far too optimistic about the power of science to broker political compromise.

Perhaps most importantly, Slobogin recognizes a number of fraught questions arising out of the racial effects of RAIs. His answers, though, are not compelling. Even a well-designed RAI, for example, has to measure recidivism in terms of arrests or convictions. Either way, that data reflects the discretionary judgment of police about which suspects to pursue, which violent crimes to investigate, and when to abandon the pursuit of evidence. Both at the level of arrests and clearance rates, numerous studies have identified racial disparities that are difficult to explain in any satisfactory way (Gase et al., 2016; Kelly, 2018). Aside from those distortions, accurate predictions of criminality, and especially violent criminality, are likely to track longstanding patterns of economic, residential, and labor-market segregation. A decision to rely on data that arises from and embodies these patterns is likely to result in the repetition of those patterns over time. That is, it will reproduce malign racial hierarchies.

To my mind, Slobogin’s response to concerns in this vein is not persuasive. He says curtly that “if…base rate criminality is unjust because of structural racism, risk assessment is not the central problem; rather, the entire criminal justice system” is inculpated in the malign reproduction of racial hierarchies (p. 92). To the extent that the concern is racial differences in the accuracy of predictions because of different arrest patterns, he suggests race-conscious calibration of an instrument (p. 94). But I suspect that race-conscious calibration would present constitutional and political questions that would be much more difficult to resolve than Slobogin allows. And his effort at a reductio that inculpates the “entire” criminal justice system misses the force of the argument advanced by many BLM theorists: that the choice to address social problems with the criminal justice system, and not social support, assures the persistence of pernicious and unacceptable racial hierarchies. It is that threshold choice that needs reconsideration, not sidestepping.

Just Algorithms, in short, does not offer a compelling answer to the problem of mass, racialized incarceration. Its more modest, yet still worthwhile, contribution turns on the particulars of RAIs. This topic is complex enough that Slobogin’s contribution is to be welcomed. Indeed, one can imagine that had he been writing a few years ago, his argument would have been more tightly focused on the questions to which he has, both in this book and his earlier work, made such large contributions.


Gase, L.N., Glenn, B.A., Gomez, L.M., Kuo, T., Inkelas, M., & Ponce, N.A. (2016). “Understanding Racial and Ethnic Disparities in Arrest: The Role of Individual, Home, School, and Community Characteristics.” Race and Social Problems, 8: 296–312.

Huq, A.Z. (2020). “The Right to a Human Decision.” Virginia Law Review, 106(3): 611–88.

Kelly, K. (2018, July 25). “Killings of Black People Lead to Arrests Less Often Than When Victims Are White.” The Washington Post. Retrieved from https://www.washingtonpost.com/graphics/2018/investigations/black-homicides-arrests/?utm_term=.13e18417a56d&eType=EmailBlastContent&eId=c6aa9d89-8008-46c6-8c0f-aeb80ab20d3a.

Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). “Human Decisions and Machine Predictions.” NBER Working Paper Series, Working Paper 23180. Retrieved from https://www.nber.org/system/files/working_papers/w23180/w23180.pdf.

Marcellin, C., & Doyle, L. (2020, October 14). “Four Months After Protests Peaked, Did Four Cities Keep Their Promises to Cut Police Funding?” Urban Institute. Retrieved from https://www.urban.org/urban-wire/four-months-after-protests-peaked-did-four-cities-keep-their-promises-cut-police-funding.

Stevenson, M. (2018). “Assessing Risk Assessment in Action.” Minnesota Law Review, 103: 303–84.

The Use of Pretrial “Risk Assessment” Instruments: A Shared Statement of Civil Rights Concerns. (n.d.). The Leadership Conference on Civil and Human Rights. Retrieved from http://civilrightsdocs.info/pdf/criminal-justice/Pretrial-Risk-Assessment-Full.pdf.

Aziz Z. Huq, Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School
