Which interpretive tasks should be delegated to machines? This question has become a focal point of “tech governance” debates. One familiar answer is that while machines are capable of implementing tasks whose ends are uncontroversial, machine delegation is inappropriate for tasks that elude human consensus. After all, if human experts cannot agree about the nature of a task, what hope is there for machines?

Here, we turn this position around. When humans disagree about the nature of a task, that should count as prima facie grounds for, not against, machine delegation. The reason has to do with fairness: affected parties should be able to predict the outcomes of particular cases. Indeterminate decision-making environments—those in which humans disagree about ends—are inherently unpredictable in that, for any given case, the distribution of likely outcomes will depend on a specific decision maker’s view of the relevant end. This injects an irreducible dynamic of randomization into the decision-making process from the perspective of non-repeat players. To the extent machine decisions aggregate across disparate views of a task’s relevant ends, they promise improvement on this specific dimension of predictability. Whatever the other virtues and drawbacks of machine decision-making, this gain should be recognized and factored into governance.

The essay has two parts. In the first, we draw a distinction between determinacy and certainty as epistemic properties and fashion a taxonomy of decision types. In the second, we bring the formal point alive through a case study of criminal sentencing.