Summary, Judgment

Legal Scholarship

Unhelpful Tips for Junior Scholars

Legal Scholarship
William Baude

On Twitter, somebody asked for my tips for young scholars. There’s no particular reason to think my tips are reliable (google “survivorship bias”), and it may not be possible to implement them, but these really are the ones I think are most important.

  1. Have lots of ideas. Many of them can be bad, probably most (see #3 below), but you need these or this whole thing is going to be miserable. Which brings us to…

  2. Enjoy writing about them. Some people find working on research independently enjoyable and fun; others regard it as work that is necessary to get other things they regard as enjoyable and fun (like paychecks or promotions or power). You will be much more successful if you are in the first category, which Adam likes to call “research as consumption.”

  3. Fail fast. You need to start working on ideas to see if they are good ones, but you also need to abandon them if they are not. You’ll be a better scholar if you can get to the question of whether to abandon them faster, rather than spending a year on a bad project and either wasting a lot of time or deluding yourself and deceiving your readers into pretending it’s a good one.

  4. Have faith in your good ideas. A corollary of #3. Once you’ve decided an idea is worth pursuing, don’t try to dress it up as some other idea that you think is more important or more marketable or more interesting. Les Green put this point very well in a discussion of “bullshit” dissertation titles, such as (the made-up example): Agency, Structure, and Power: The Milk-Marketing Board of Ruritania, 2007-2009:

    “Never allow doctoral students to use subtitles. Either there is good reason to study three years of decisions of the Milk-Marketing Board or there isn’t. . . . If there is, they should have the courage of their convictions and make the subject their title. If there isn’t, do not allow them to waste their intellectual careers on trivia and then package it up in a bullshit title.”

  5. Say no. You shouldn’t say no to everything. You want to engage with others, both for their benefit and for yours. But if your research is successful you will have more invitations than you can accept. And time is scarce, so you can’t afford to give all of yours away. To implement this rule, I highly recommend the wise advice of Sarah Lawsky to have a “no buddy,” a trusted professional friend who reviews all of your new commitments before you accept them, and who has the power and duty to say “no” most of the time. Which brings us to….

  6. Have friends you can trust. You can have ideas and enjoy writing about them on your own. But to decide which ideas to abandon and which requests for your time you can meet, you need advice, and the best advice comes from people who know you well and whose judgment you can trust. You want to share drafts or sketches or ideas early enough that it’s not too late to abandon them. You want people who know what you are good at and not so good at. In other words, you need friends.

I call this advice unhelpful because I don’t know what to say about how to do some of these things if you don’t already. But I think they are what successful junior scholars truly need.

This Year's Writings

Legal Scholarship
William Baude

I confess that this year felt like it got away from me, but looking back I am surprised to discover that I published four articles in 2019.

Two of them were pieces on originalist theory co-authored with Steve Sachs: Grounding Originalism, in the Originalism 3.0 edition of the Northwestern Law Review, and Originalism and the Law of the Past in the special originalism issue of the peer-reviewed Law and History Review.

I also published two solo-authored pieces. I recently blogged about the most recent one, The Unconstitutionality of Justice Black (Texas Law Review).

The other, which I published at the start of the year, was an article in the Stanford Law Review called Constitutional Liquidation. Liquidation is something I’ve been working on since before I officially became a law professor — it’s an attempt to reconstruct a profound aspect of James Madison’s theory of constitutional law, as well as to provide a theory of constitutional precedent: that the practice of the government (not necessarily the courts) can “liquidate” the meaning of ambiguous parts of the Constitution when it is sufficiently deliberate, widespread, and settled. Here’s the abstract:

James Madison wrote that the Constitution’s meaning could be “liquidated” and settled by practice. But the term “liquidation” is not widely known, and its precise meaning is not understood. This Article attempts to rediscover the concept of constitutional liquidation, and thereby provide a way to ground and understand the role of historical practice in constitutional law.
Constitutional liquidation had three key elements. First, there had to be a textual indeterminacy. Clear provisions could not be liquidated, because practice could “expound” the Constitution, but could not “alter” it. Second, there had to be a course of deliberate practice. This required repeated decisions that reflected constitutional reasoning. Third, that course of practice had to result in a constitutional settlement. This settlement was marked by two related ideas: acquiescence by the dissenting side, and “the public sanction” – a real or imputed popular ratification.
While this Article does not provide a full account of liquidation’s legal status at or after the Founding, liquidation is deeply connected to shared constitutional values. It provides a structured way for understanding the practice of departmentalism. It is analogous to Founding-era precedent, and could provide a salutary improvement over the modern doctrine of stare decisis. It is consistent with the core arguments for adhering to tradition. And it is less susceptible to some of the key criticisms against the more capacious use of historical practice.

Besides these articles, I also finally published a chapter, The Court or the Constitution?, in a festschrift for the great Professor Larry Alexander. It’s a short piece, but I’m especially proud that it merited this comment in James Allan’s review of the festschrift:

As for Baude, let me just say that it is very seldom indeed that I read someone get the better of an argument with Larry Alexander. In my view Baude does just that in his chapter, even after considering Alexander’s reply. Both are a treat, but Baude’s claim that Alexander cannot have both the cake of judicial supremacy while also eating the truth of originalism, convinced me, and I recommend the exchange to all readers.

Finally, I also co-authored two amicus briefs to the Supreme Court, arguing that they should grant certiorari to decide whether to reconsider the doctrine of qualified immunity. (Something I published an article about last year.) One, in a case called Doe v. Woodard, was denied this summer. The other, in a case called Baxter v. Bracey, has been repeatedly rescheduled and is still waiting for the Court to decide whether to grant it.

Published: The Unconstitutionality of Justice Black

Legal Scholarship
William Baude

My latest article was just published in the Texas Law Review, and it is called “The Unconstitutionality of Justice Black.” I originally gave it the accurate but completely uninteresting title “Ex Parte Levitt,” the name of the too-widely-forgotten case that inspired the article.

The article is about the constitutional controversy over the appointment of Justice Black. The day that Black was sworn in to the Supreme Court in 1937, an apparent crank tried to orally argue that Black was an unconstitutional usurper. The Court dismissed the case on procedural grounds.

But it turns out that the crank was correct, and might not really have been a crank. Justice Black was unconstitutionally appointed, and while the suit might have had some procedural problems, they weren’t exactly the problems that the Court thought they were.

The piece also discusses the aftermath of the litigation. As you may know, Justice Black went on to sit on the Court for more than three decades. But during all that time, the Court never actually ruled on the lawfulness of his appointment. Instead, after a while, everybody just took it for granted.

As I’ve blogged elsewhere, I’m generally a fan of Justice Black’s work, so I feel a little sheepish about publishing the piece. But I’ve become convinced that his appointment was unconstitutional. You can read the whole thing (only 30 pages) if you want to see why.

Reforming the Academic Publication Process Should Be a First-Order Priority

Legal Scholarship
Adam Chilton
[Slide from Neelanjan Sircar’s presentation: female labor force participation in India]

Anup Malani and I hosted a conference last week in New Delhi on finding ways to “Improve the Lives of India’s Urban Poor.” One of the presenters at the conference, Neelanjan Sircar, motivated his presentation with the above slide illustrating India’s disturbingly low female labor force participation (FLFP). As India has gotten richer, women have dropped out of the labor force at a rate higher than other South Asian countries. Neelanjan persuasively argued that understanding the drivers of this trend has massive welfare consequences, and given the size of India, is a question of first order importance.

Of course, at a conference on urban poverty in India, FLFP wasn’t the only problem of first-order importance that came up. After all, India is facing dire consequences caused by climate change; there are frequent cases of violence against women and religious minorities; and there are still hundreds of millions of people in extreme poverty.

But here’s the most persuasive argument I heard last week about which problem should be the highest priority for researchers to tackle: making the academic publication process more efficient. Anup was the one who made this argument, and he was primarily talking about the peer review process in the social sciences. Here’s the case he made.

Right now it frequently takes years for a paper to make it through the peer review process in political science, economics, or law & economics. Although this process sometimes makes the paper better, often it just produces years of lateral moves. For instance, if you submit to Journal A, a reviewer may suggest swapping piece of evidence X for piece of evidence Y. If you’re rejected from Journal A, it’s reasonable to spend a few months making the changes the reviewers suggested before you submit to Journal B. When the referee reports come back from Journal B months later, it’s all too common for a different reviewer to say it makes no sense to use piece of evidence Y instead of X. And then the process repeats.

The result is that authors spend years making tweaks to the same paper instead of starting new projects. If you believe that there is social value in the production of knowledge, this inefficient process is a massive social loss. And it may be the most important problem for academia to tackle, because whatever you think are the most important questions for researchers to study, they could be doing dramatically more of that work if we could speed up the peer review process and let them move on to new projects. Think Neelanjan is right that FLFP in the developing world is a high-priority problem to address? Fix the publication system so that researchers like Neelanjan can write twice as many papers instead of spending their time tweaking existing ones.

This brings me to the AALS proposals to reform the law review publication process that Will blogged about yesterday. The big advantage of the law review publication process is its speed. Established authors can submit their papers to dozens of law reviews in February and know with an extremely high degree of confidence that one of the journals will accept their paper by April. It would be a shame for any law review reform to give up this advantage over peer reviewed journals (although making the process a month or two longer wouldn’t be a huge deal).

But the big drawback of the law review process is that the editors making publication decisions have no expertise in the subject matter of the articles specifically, or a good sense of what makes for a good article more generally. Peer-reviewed journals typically don’t let second-year graduate students review papers, even though those students spend their time in seminars reading and debating academic articles. The view that a second-year law student could assess the relative merits of papers after a year and a half of classes spent mostly reading appellate cases is simply not credible.

The result is that law review placement is not a meaningful signal of article quality. It’s true that articles in the “Prestigious Law Journal” might be better on average than those in the “Low Ranked Journal”, but there is so much noise in placement that it’s tough to look at a law professor’s list of publications and know much of anything other than how frequently they like to submit papers.

This creates its own kind of inefficiency. It is hardest for professors at lower-ranked schools to lateral successfully, because their articles never get the shot at top journals that blind peer review would give them. Consequently, the law professors who are best at producing research have difficulty moving to schools with better resources that would, in turn, allow them to produce more scholarship. Like the peer review process, the law review process thus unnecessarily reduces the production of knowledge.

The only way to fix this problem is to introduce some form of peer review. Which is why it is nice to see the AALS proposals include a section on a possible peer review pool. Many of their ideas are sensible. But I would go further.

For one, I would require any author who submits an article to write referee reports for three other articles. Only after the author writes three reviews, and their paper has been reviewed three times, would the article be released to journals. Additionally, I would only allow law reviews access to the pool of peer reviews if they commit to not publishing articles that have not gone through the peer review process. The list of journals that have made that commitment would be posted online, and sticklers like me would know not to take anyone’s placements seriously if they publish in journals that have not made the commitment.
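To make the release rule concrete, here is a toy sketch (my own illustration; the names are invented, and the thresholds track the proposal above):

```python
# A toy sketch of the release rule described above: an article reaches
# journals only once its author has written three referee reports and
# the article has itself been reviewed three times.
from dataclasses import dataclass

@dataclass
class Submission:
    author: str
    reports_written: int = 0    # referee reports the author has completed
    reviews_received: int = 0   # reviews this article has received

    def released_to_journals(self) -> bool:
        return self.reports_written >= 3 and self.reviews_received >= 3

paper = Submission(author="Prof. X", reports_written=3, reviews_received=2)
print(paper.released_to_journals())   # False: still waiting on a third review
```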

The AALS proposal has versions of both of these ideas, but it does not make them requirements or a cornerstone of the plan. These requirements, combined with some of the proposed reforms (for instance, the requirement to take the first placement offered), could help improve the signal in article placement without having law schools take on all of the problems of full peer review models.

Do Law Journals Need Real Reform?

Legal Scholarship
William Baude

Brian Galle has posted a discussion draft of A Proposal for Law Journal Reform, which is a project of The AALS Section on Scholarship, Advisory Committee on Law Journal Reform. Here is the introduction:

No one is satisfied with today’s legal publishing. The long-standing tradition of simultaneous submission to student-edited journals has always involved tradeoffs, but the costs of that approach have grown dramatically over the last decade. Where once even top journals faced a relatively manageable task in identifying promising submissions, technological innovation now enables authors to easily submit to hundreds of journals with a few clicks. The result has been enormous practical and even ethical pressures on students and authors. Top journals receive more than 4,000 submissions annually. Selection outcomes are often driven not by merit but by insider knowledge, such as whether an author knows when journals are open to selecting articles or how to “expedite” publication offers to more-preferred journals. Increasingly, top journals are demanding exclusive submission windows, undermining one of the core strengths of the traditional structure. With few clear rules of the road, opportunities for gamesmanship on each “side” are prevalent, and may be mutually reinforcing.

While we believe that legal academia can and should agree on “best practices” to improve how authors and editors conduct themselves, we are realists. No set of idealized norms can succeed in the face of enormous structural pressures. Fundamental reforms are necessary.

Thus, the Section offers two possible paths for reform, each of which can be further tailored. In the simpler path, authors will submit to a small number of journals at a time, and must accept the first offer received. Journals will not extend offers during a “quiet period” of four weeks or so. A more ambitious path involves adoption of a two-round Shapley matching system, better known as the “med school” match. In that path, authors will rank a set of journals from which they would accept offers, and journals will rank those articles that meet their publication threshold. Both paths can be combined with a new peer review pool, as we describe, and additionally AALS Member Schools can adopt and encourage compliance with a set of complementary best practices for authors and editors.

Though we detail the strengths and potential weaknesses of these options in more detail below, we want to emphasize here their overwhelming advantage over the status quo: each would essentially eliminate expedited review. Expedited review is the root cause of nearly all the problems we and other stakeholders have identified with the current approach. It motivates mass submissions and other, even less fortunate, gaming behaviors. It turns many journals into screening editors for journals that are more preferred by authors, greatly increasing both their workloads and frustrations. The time pressures it imposes make meaningful peer review next to impossible. And it systematically rewards authors who are most expert at navigating the system.

An alternative, of course, would be to turn to the exclusive-submission model common in other academic disciplines. We believe that would be too radical a step. It would greatly extend time to acceptance for most authors without alleviating the crushing workload of top-journal editors. Further, many outstanding law journals — although not enough, in our view — already operate under the traditional exclusive-submission/peer-review model of the social sciences. We believe that preserving both paths is important for the discipline.
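A side note on mechanics: the “med school” match the proposal invokes is, at bottom, the Gale-Shapley deferred-acceptance algorithm. Here is a minimal one-round sketch with invented articles, journals, and preference lists, not the proposal’s actual mechanism.

```python
# A minimal sketch of one round of deferred acceptance (Gale-Shapley),
# the algorithm behind the "med school" match the proposal invokes.
# All articles, journals, and preference lists here are invented.

def deferred_acceptance(author_prefs, journal_prefs, slots):
    """Authors propose down their ranked lists; journals tentatively
    hold their highest-ranked proposals up to capacity."""
    rank = {j: {a: i for i, a in enumerate(prefs)}
            for j, prefs in journal_prefs.items()}
    next_choice = {a: 0 for a in author_prefs}   # pointer into each author's list
    held = {j: [] for j in journal_prefs}        # tentative acceptances
    free = list(author_prefs)                    # authors not yet held anywhere

    while free:
        a = free.pop()
        prefs = author_prefs[a]
        if next_choice[a] >= len(prefs):
            continue                             # author exhausted their list
        j = prefs[next_choice[a]]
        next_choice[a] += 1
        if a not in rank[j]:
            free.append(a)                       # journal won't consider this article
            continue
        held[j].append(a)
        held[j].sort(key=lambda x: rank[j][x])   # journal keeps its favorites
        if len(held[j]) > slots[j]:
            free.append(held[j].pop())           # lowest-ranked proposal is released

    return held

author_prefs = {
    "Article A": ["Rev. 1", "Rev. 2"],
    "Article B": ["Rev. 1", "Rev. 2"],
    "Article C": ["Rev. 2", "Rev. 1"],
}
journal_prefs = {
    "Rev. 1": ["Article B", "Article A"],        # Rev. 1 won't take Article C
    "Rev. 2": ["Article A", "Article C", "Article B"],
}
print(deferred_acceptance(author_prefs, journal_prefs, {"Rev. 1": 1, "Rev. 2": 1}))
# {'Rev. 1': ['Article B'], 'Rev. 2': ['Article A']}; Article C is unmatched this round
```

The key property is that no author and journal would both prefer each other to their assigned match, which is what removes the incentive to expedite.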

I know that law professors love to complain about law journals and the law journal process, and I know that my own experiences surely bias my assessment of the system, but my view is that law reviews are not that bad. (As I’ve written before, most law review articles are not good, but I’m not convinced this is anything other than an application of Sturgeon’s Law — “90% of everything is crap.”)

In particular, the proposal focuses on two problems with the current system. The first is that prestige/quality sorting is imperfect, so the best articles are not always published in the most prestigious journals. This is surely true, but I’m not sure how true. When I come across new articles, I notice a pretty consistent correlation between better articles and better journals — far from perfect, but marked. And so the question is how much that correlation would increase under these new systems. I don’t think we know, especially since we don’t know what the current correlation is, or what causes it.

The second problem is that law review editors have too many articles to read, and therefore spend too little time on most of the articles. I take this problem more seriously, since it (according to the proposal) reflects the consensus view of law review editors. But there are good reasons for the systems we have, and so it’s hard to come up with a superior one that is likely to get any traction.

That leads me to wonder if both of the proposed solutions are too ambitious, since they require a lot of journals and authors to buy into the new regime. What if, instead, we focused on disclosure and promise enforcement on the part of the authors? What if we simply required authors to disclose how many other journals they were currently submitting to? Journals could focus their efforts, if they wanted to, on the authors who were not broadly playing the field. And what if we also allowed authors to promise to accept an offer if they received it? Journals could focus their efforts on these sure-yield articles if they wanted to.

Allowing both of these options would effectively let authors and journals opt in to one of the AALS-proposed systems — in which authors submit only to a small batch of journals and promise to accept any of them. But it would also make it possible to make marginal moves towards that equilibrium without requiring everybody to move at once. And it would leave journals and authors free to make their own judgments, which would let us find out how strong the demand for the proposed equilibrium really is.

"Chocolate Accelerates Weight Loss!" Or, why to worry about randomization.

Legal Scholarship
Adam Chilton

In 2008, a program called “Secure Communities” was launched that sent information about anyone arrested by local police departments to the federal government so that their immigration status could be checked. The program was billed as an effort to “secure” our communities by increasing immigration enforcement and, in turn, reducing crime.

In a 2014 paper, Adam Cox and Tom Miles examined whether the Secure Communities program actually reduced crime. The paper leveraged the fact that the program was rolled out gradually county by county to test its effects. The paper made a big splash because it found that increased focus on deportations wasn’t accomplishing much.

Cox and Miles’ paper wasn’t just substantively important — the research design has become influential too. Why? Their paper showed that the rollout was haphazard in a way that made it a quasi-random source of county-level variation in government policy. This kind of variation is what makes causal inference, and thus publication in peer reviewed journals, possible.

So, naturally, other scholars started using the rollout of Secure Communities to study other topics. For instance, Ariel White wrote a paper looking at the effect of Secure Communities on Latino voter turnout; Marcella Alsan and Crystal Yang looked at the effect of Secure Communities on take-up of social insurance programs; East et al. explored the effect of Secure Communities on employment patterns for low-education native workers; and Dee and Murphy looked at the effect of Secure Communities on school enrollment. The research design was even used by Hines and Peri to study the effect of Secure Communities on crime (which, if you’re thinking that sounds an awful lot like the original Cox and Miles paper, you’d be right).

Why am I bringing this up? The Regulatory Review has been running a series of essays about the very sensible idea of encouraging the government to incorporate more randomization into its policy implementation. The hope is that by randomizing—the way the rollout of Secure Communities was staggered—it will be possible for scholars to evaluate the effects of programs in a rigorous way.

In general, I’m totally on board with this idea. Randomization makes it possible to do causal inference, and causal inference makes it possible to know whether policies are working. But we do need to worry that the resulting proliferation of studies will start to produce bogus results. Here’s why.

As I explained in my essay for the Regulatory Review series, when researchers look for the effects of a policy in a lot of places, they run into a problem called Multiple Hypothesis Testing (“MHT”). The concern with MHT is that statistically significant results occur by chance roughly 5% of the time even when there is no true effect, so if we test an intervention’s effect 20 times, we should expect about one bogus result.

My favorite example of this is the chocolate weight-loss hoax. To prove that newspapers will publish anything that sounds scientific without thinking, a scientist/journalist conducted a study in which people were randomly assigned to eat chocolate. The researchers then measured 18 outcomes for the people in the study. Predictably, the study found that one of the 18 variables was statistically significant thanks to random chance. An academic paper based on the study was published in a “predatory” journal, and newspapers around the world ran stories about the finding with headlines like “Chocolate Accelerates Weight Loss”.
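A quick simulation shows just how likely that is (my own illustration with made-up data, not the hoaxers’ numbers): with 18 outcomes and no true effect at all, most such studies will still “find” something.

```python
# Simulating the multiple hypothesis testing problem behind the chocolate
# hoax: no true treatment effect, 18 outcomes, 5% significance threshold.
# Illustrative numbers only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_outcomes, n_per_group = 10_000, 18, 15

false_positive_studies = 0
for _ in range(n_trials):
    # Treatment and control draw every outcome from the same distribution,
    # so any "significant" difference is pure noise.
    treated = rng.normal(size=(n_per_group, n_outcomes))
    control = rng.normal(size=(n_per_group, n_outcomes))
    pvals = stats.ttest_ind(treated, control, axis=0).pvalue
    if (pvals < 0.05).any():
        false_positive_studies += 1

print(f"Studies with at least one 'significant' outcome: "
      f"{false_positive_studies / n_trials:.0%}")
# With 18 independent tests, about 1 - 0.95**18, or roughly 60%.
```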

What does this problem have to do with governments randomizing policy? The worry is that researchers are drawn to randomized policy interventions like moths to a flame. So when policies are randomized, people study them from every possible angle. And a lot of people looking for outcomes from the same intervention means we are naturally going to start getting some bogus results because of the multiple hypothesis testing problem.

For instance, if studies keep looking for the effect of Secure Communities in more and more places, some of the results are going to be bogus. Not because the researchers are being nefarious, but just because of random chance.

———

If you’re interested in the topic, check out my essay and the rest of the series being published by the Regulatory Review. And shout out to Colleen Chien for writing the essay that inspired the series and inviting me to contribute. Thanks Colleen!

Citation Rankings and the Human Touch

Legal Scholarship
William Baude

I was pondering Adam’s post last Friday about measurement error in law school rankings and then I thought about his posts earlier in the week about human v. computer judges and referees. I wonder if those latter posts provide the best approach to the citation/rankings problem.

Given the imperfections and transparency of citation rankings, they will be gamed in troubling ways. But they still provide important objective evidence that is missing from the current rankings system. Maybe the solution is this: give the faculty citation counts to some humans, and ask them to use the counts to decide the scholarly rankings. We could do this with the current survey group for scholarly reputation at US News, or with a different group of people if we trusted them more for some reason.

The advantages are obvious. The human beings could average, generalize, or combine across multiple ranking systems. They could make some of the tradeoffs Adam describes between junior and senior faculty. And they’d make it harder to game the rankings, because they’d be able to adjust for apparently strategic behavior.

Of course, the problem is that the humans probably wouldn’t be objective enough, and that plenty of humans probably don’t agree that citation counts are all that relevant to scholarly quality, so they might refuse to cooperate in the project. Still, just like asking judges to use data to assign sentences, it might be the best we can do.

Citation Rankings and Measurement Error

Legal Scholarship
Adam Chilton

There’s been a lot of recent debate about ranking law schools based on their faculties’ citations. U.S. News and World Report has announced plans to incorporate citations into its overall ranking, and Paul Heald and Ted Sichelman have just released a new paper providing exactly that kind of ranking.

Both of these rankings rely on citation counts from HeinOnline. (Note: Heald and Sichelman also use SSRN downloads in their rankings.) As many have pointed out, relying on HeinOnline does not capture all of a law professor’s citations. Instead, it measures citations to articles in HeinOnline-indexed journals from other articles in HeinOnline-indexed journals. If an article published by the Fancy Law Review is cited 100 times by articles published by the Prestigious Law Journal, this isn’t a problem: HeinOnline would pick up all 100 citations. And because most law professors publish most of their scholarship in law reviews carried by HeinOnline, this isn’t a problem most of the time.

But it is a problem some of the time. For instance, if a law professor publishes a book that receives 100 citations, HeinOnline would not pick up any of them. So law schools that have relatively more professors writing books are going to be lower ranked than they should be just because of how the citations are measured for the new rankings. In other words, the proposed new rankings have measurement error.
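A toy example (all numbers invented) shows how this kind of non-random error distorts a ranking even when two faculties have identical total impact:

```python
# Toy illustration of non-random measurement error in citation rankings:
# book citations are invisible to the index, so a book-heavy faculty
# looks far weaker despite equal total impact. All numbers invented.
schools = {
    # (article citations, book citations) per faculty member
    "School A": [(300, 0), (280, 0), (260, 0)],        # article-focused
    "School B": [(150, 150), (140, 140), (130, 130)],  # half their impact in books
}

for name, faculty in schools.items():
    true_impact = sum(articles + books for articles, books in faculty)
    measured = sum(articles for articles, _ in faculty)  # index sees articles only
    print(f"{name}: true impact {true_impact}, measured {measured}")

# School A: true impact 840, measured 840
# School B: true impact 840, measured 420 -- ranked far lower despite equal impact
```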

Of course, measurement error is a reality for anyone working with data, and researchers normally don’t get bent out of shape about it. That’s because measurement error that is random might add noise, but it’s not going to create systematic problems. And when the measurement error is non-random, researchers can just explain to readers the ways that the error is going to bias their results.

But a lot of researchers are getting bent out of shape about the measurement error in the potential U.S. News and World Report rankings. And I’m one of them. That’s because non-random measurement error in rankings creates the potential for gamesmanship. If rankings systematically undercount the work of people who publish books or who publish in journals not indexed by HeinOnline, there will be less of a market to hire these scholars.

This problem is exacerbated by the fact that so many aspects of the U.S. News and World Report rankings are extremely sticky. Law school deans can’t snap their fingers and change the median LSAT scores and GPAs of the students who attend their schools. These things move very slowly over time. But deans can try to hire scholars with more HeinOnline citations at the margins. The result is that non-random measurement error in the rankings will translate into distortions of the academic labor market. This will in turn distort our core mission: the production and dissemination of knowledge.

If you care about the rankings debates, Jonathan Masur and I recently posted a short paper on SSRN where we explain this concern and lay out a few more. You should also check out Paul and Ted’s own paper, where they explain the numerous steps they’ve already taken to reduce measurement error and lay out their plans to reduce it even further in the near future. And, although I’ve got concerns about the current measurement error in citation rankings, I want to end by saying that Paul and Ted are being extremely thoughtful about how to produce rankings as transparently and accurately as possible.

How Being a Law Professor Ruined Watching Professional Sports for Me

Legal Scholarship
William Baude

Adam’s post about the differences between umpires and referees reminds me of a provocative article by Mitch Berman, “Let ’em Play”: A Study in the Jurisprudence of Sport, as well as two recent blog posts by Dave Pozen, What Are The Rules of Soccer? and The Rulification of Penalty Kicks. These pieces all explore the gap between the “law in the books” and the “law in action” in certain professional sports. Though the rules don’t say so, many of us expect the officials to systematically deviate from or underenforce the rules under certain conditions. The analogy to law is natural.

I hate to be a spoilsport, but reading these pieces helped me understand what I have always found so frustrating about watching, say, the NBA finals. I love basketball, and I live in Chicago, but I still think Michael Jordan should have been held to the same number of steps as everybody else on the court. And I don’t like watching the rules get suspended during a tense final quarter. I think we all agree that the players, not the refs, should be the center of the action during the climax of a game. But in my view, by deviating from the duly promulgated rules, even by declining to enforce them, the refs make their own judgment all too central.

Of course, these views probably will not surprise people who know me, since I am a formalist when it comes to judicial interpretation too. And it seems like I’m in a decided minority. But we shouldn’t take the system of referee discretion for granted.

Judges as Referees

Legal Scholarship
Adam Chilton

Several recent papers have found that algorithms are better than judges at predicting human behavior. In one high-profile example, Kleinberg et al. used an algorithm to re-evaluate judges’ decisions to grant defendants pretrial release in New York City from 2008 to 2013. They showed that an algorithm given variables about “characteristics of the defendant’s current case, their prior criminal record, and age (but not other demographic features like race, ethnicity, or gender)” could dramatically out-perform the actual decisions made by judges. As the authors put it, relying on their algorithm could have produced “crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates.”
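For a sense of the setup, here is a minimal sketch of that kind of prediction task on synthetic data (this is not Kleinberg et al.’s model, features, or data; every number below is invented): train a classifier on case, record, and age variables, then use its risk scores to decide whom to release.

```python
# A minimal sketch of an algorithmic pretrial risk score on synthetic
# data (not Kleinberg et al.'s model or data; all numbers invented).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
X = np.column_stack([
    rng.integers(0, 2, n),      # current charge is a felony?
    rng.poisson(1.5, n),        # number of prior arrests
    rng.integers(18, 70, n),    # age
])
# Invented risk process: felony charges and priors raise risk, age lowers it.
logit = -2.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.03 * (X[:, 2] - 18)
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = pretrial misconduct

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
risk = GradientBoostingClassifier().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Release everyone below the 70th percentile of predicted risk, then check
# how much misconduct that release rule would have let through.
released = risk < np.quantile(risk, 0.70)
print(f"Misconduct rate among released defendants: {y_te[released].mean():.1%}")
```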

Their finding is sufficiently thought-provoking that Malcolm Gladwell used it as a motivating example in his new book, Talking to Strangers. The relevance to Gladwell’s argument is that the judges have access to all the information the researchers fed into the algorithm, plus they can look the defendant in the eye when assessing character. But even with more information, the humans are just systematically worse at this kind of decisionmaking than the computers.

This pessimistic evidence about the quality of judicial decisionmaking reminds me of John Roberts’s analogy of judges as umpires. If what we are after is calling balls and strikes, computers can now do it more accurately, quickly, and cheaply than umpires.

But a paper released yesterday suggests that maybe judges are better thought of as referees than umpires. Megan Stevenson and Jennifer Doleac’s paper examines how judges in Virginia who were given algorithmic risk assessment scores changed the way they made sentencing decisions. They found that judges’ decisions were influenced by the information: judges gave defendants with higher risk scores longer sentences and defendants with lower risk scores shorter sentences.

However, the judges deviated from the risk scores in an important way. Despite high predicted risks of recidivism, the judges systematically gave young defendants more lenient sentences. This deviation leads Stevenson and Doleac to persuasively conclude that the judges have goals other than just predicting recidivism in mind when they are sentencing.

This makes judges seem more like basketball referees than baseball umpires. When refs are calling a basketball game, most fans are open to the idea that the refs might call the game differently depending on the circumstances. When the stakes are high—at the end of the game, in the playoffs—we’re often cool with the refs giving the players more leeway. If this is the right analogy, maybe it’s a little unfair to say that judges aren’t doing a great job of calling balls and strikes when they are actually playing a different game.