Predictive models are increasingly used in the criminal justice system to forecast who will commit crimes in the future and where those crimes will occur.
Because decisions influenced by these models affect individuals’ liberty, it is of the utmost importance that the predictions they generate be ‘fair’.
Using examples from predictive policing and recidivism risk assessment, Dr Kristian Lum will demonstrate how – if considerations of fairness and bias are not explicitly accounted for – such models could perpetuate and, under some circumstances, amplify undesirable historical biases encoded in the data.
Dr Lum will then give a brief overview of several notions of fairness that have been proposed in the ‘algorithmic fairness’ literature as solutions to these problems. She will close with a discussion of the ways in which policy, rather than data science, influences the development of these models and some alternative non-algorithmic solutions to the underlying problems these models seek to address.
About the speaker
Dr Kristian Lum is Lead Statistician at the Human Rights Data Analysis Group (HRDAG), where she leads the HRDAG project on criminal justice in the United States.
Dr Lum’s research primarily focuses on examining uses of machine learning in the criminal justice system; she has concretely demonstrated the potential for machine learning-based predictive policing models to reinforce and, in some cases, amplify historical racial biases in law enforcement.
Read more about the Ihaka Lecture Series.