Peter Eckersley: Predicting Guilt and Automating [De]incarceration
August 20 @ 6:00 pm - 9:00 pm PDT
About this Event
Predicting Guilt and Automating [De]incarceration — Algorithms in the US Criminal Justice System — a Foresight Strengthening Civilization salon series with Peter Eckersley in discussion with Lou de Kerhuelvez.
Are we ready for AI judges?
As automation is increasingly deployed to assist or replace human decisions, it becomes crucial to evaluate the potential social and ethical consequences of AI-powered decision-making.
Peter Eckersley will be discussing the Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System recently published by The Partnership on AI (PAI).
This report raises serious concerns about risk assessment tools in the U.S. criminal justice system, particularly in the context of pretrial detention. Issues include:
- Bias in the tools themselves;
- Problems with the human-tool interface;
- Questions of governance, transparency, and accountability.
These concerns are nearly universal in the AI research community, as they apply to most attempts to use data to train statistical models or to create heuristics for decisions that have social and ethical implications.
Peter led the Partnership on AI's effort to convene the machine learning research community and produce a shared position on the algorithmic risk assessment tools that are in widespread use throughout the US criminal justice system, and that have now been mandated by California legislation.
There was widespread agreement that the current tools are deeply flawed on statistical, procedural, and bias grounds, though some disagreement about whether they could conceivably be improved enough to be constructive. To synthesize across those views, the report identified ten requirements that would need to be met before their use could even conceivably be appropriate for the incarcerative purposes for which they are often employed.
This salon will outline both what PAI learned along the way, and how this debate fits into the larger context of mass incarceration and criminal justice reform in the United States.
Read the report here: https://www.partnershiponai.org/report-on-machine-learning-in-risk-assessment-tools-in-the-u-s-criminal-justice-system/
About the speaker:
Peter Eckersley is Director of Research at the Partnership on AI, a collaboration between the major technology companies, civil society and academia to ensure that AI is designed and used to benefit humanity. He leads PAI’s research on machine learning policy and ethics, including projects within PAI itself and projects in collaboration with the Partnership’s extensive membership. Peter’s AI research interests are broad, including measuring progress in the field, figuring out how to translate ethical and safety concerns into mathematical constraints, and setting sound policies around high-stakes applications such as self-driving vehicles, recidivism prediction, cybersecurity, and military applications of AI.
Prior to joining PAI, Peter was Chief Computer Scientist for the Electronic Frontier Foundation. At EFF he led a team of technologists that launched numerous computer security and privacy projects, including Let’s Encrypt and Certbot, Panopticlick, HTTPS Everywhere, the SSL Observatory, and Privacy Badger; they also worked on diverse Internet policy issues, including campaigning to preserve open wireless networks; fighting to keep modern computing platforms open; helping to start the campaign against the SOPA/PIPA Internet blacklist legislation; and running the first controlled tests to confirm that Comcast was using forged reset packets to interfere with P2P protocols.
Snacks and drinks will be provided.