Addressing Discrimination in Prediction Policy Problems
The growing availability of digital administrative records, combined with new prediction tools developed in machine learning, has led to the increased use of data-driven predictions to inform policy decisions. Examples include hiring decisions based on predictions of an employee's productivity, program services prioritized based on predictions of who is likely to benefit the most, police resources allocated based on predictions of where crime is likely to occur, and pre-trial bail decisions informed by predictions of risk. These decisions have historically been made by people, who inevitably make inferences, draw conclusions, and bring their own biases to the decision-making process. While machine learning is expected to improve predictions and the quality of the policy decisions that depend on them, Jens Ludwig and his collaborators note that the new algorithms may unintentionally exacerbate disparities between groups. Ludwig and his colleagues will investigate the fairness concerns that arise in naturalistic datasets across a range of policy domains and test the extent to which four different measures to promote algorithmic fairness proposed in the machine learning literature work in practice.