Addressing Discrimination in Prediction Policy Problems

Awarded Scholars:
Jens Ludwig, University of Chicago
Sendhil Mullainathan, Harvard University
Jon Kleinberg, Cornell University
Benjamin Keys, University of Pennsylvania
Project Date:
Apr 2018
The growing availability of digital administrative records, combined with new prediction tools developed in machine learning, has contributed to the increased use of data to inform policy decisions based on predictions. Examples include hiring decisions based on predictions of an employee's productivity, program services prioritized based on predictions of who might benefit the most, police resources allocated based on predictions about where crime is likely to occur, and pre-trial bail decisions informed by predictions about risk. These decisions have historically been made by people, who inevitably make inferences, draw conclusions, and bring their own biases to the decision-making process. While machine learning is expected to improve predictions and the quality of the policy decisions that depend on them, Jens Ludwig and his collaborators note that the new algorithms may unintentionally exacerbate disparities between groups. Ludwig and his colleagues will investigate the fairness concerns that arise in naturalistic datasets across a range of policy domains, and test the extent to which four different measures to promote algorithmic fairness proposed in the machine learning literature work in practice.
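The announcement does not specify which four fairness measures the project will test, but measures of this kind in the machine learning literature are typically statistical parity conditions computed over a model's decisions. As a purely hypothetical illustration (not the project's actual methodology), one widely discussed measure, demographic parity, can be sketched as:

```python
# Hypothetical illustration of one common fairness measure from the
# machine-learning literature: demographic (statistical) parity.
# It compares the rate of favorable predictions across two groups;
# a gap near zero indicates parity on this measure.

def demographic_parity_gap(predictions, groups):
    """Return the absolute difference in favorable-prediction rates
    between the two groups present in `groups`.

    predictions: list of 0/1 model decisions (1 = favorable outcome)
    groups: list of group labels, one per prediction
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this illustration assumes exactly two groups"
    rates = []
    for g in labels:
        decisions = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# Example: group "a" receives a favorable decision 2/3 of the time,
# group "b" only 1/3 of the time, so the parity gap is 1/3.
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
print(round(gap, 3))  # 0.333
```

Other measures proposed in the literature (for example, equalizing error rates rather than decision rates across groups) follow the same pattern of comparing a per-group statistic, which is part of why different fairness definitions can conflict in practice.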

