Join The Partnership at FAT* 2020

Building Community around Fairness, Accountability, and Transparency in AI

The impact of algorithmic systems on fairness, transparency, and accountability (FTA) is at the heart of the ACM FAT* Conference (January 27-30, 2020) where, for the third year, organizers are convening a diverse community of scholars and practitioners from computer science, law, social sciences, and humanities to address this important topic. This issue is also a focus for PAI, as FTA in AI is one of our central thematic pillars.

At this year’s ACM FAT*, PAI is looking forward to sharing a number of recent projects, ranging from multistakeholder work to academic research, that address FTA issues, with a focus on bridging the gap between principles and practice.

Explainability techniques are key to transparency, accountability, and fairness. PAI’s research is among the first to examine how ML explainability techniques are actually being used in practice. Our conference paper on this work, “Explainable Machine Learning in Deployment,” will be presented on Thursday, January 30, in Session 14. Through interviews with ML developers, PAI researchers Alice Xiang and Umang Bhatt found that explainability techniques are not yet up to the task of enhancing transparency and accountability as intended. Additional improvements are necessary in order for these techniques to truly help end users, policymakers, and other external stakeholders understand and evaluate automated decisions.
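
As a concrete illustration of the kind of post-hoc technique the paper studies, here is a minimal sketch (illustrative only, not code from the paper) of permutation feature importance: shuffle each input feature and measure the drop in held-out accuracy to see which inputs a model actually relies on.

```python
# Minimal sketch of a post-hoc explainability technique: permutation
# feature importance with scikit-learn. Dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# importances_mean[i] is the average accuracy drop when feature i is
# shuffled: a model-agnostic signal of which inputs drive predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```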

PAI will also be sharing our research on the impact of organizational culture and structure on efforts to implement fair, accountable, and transparent ML at an interactive Implications Tutorial on Monday, January 27. Led by Jingying Yang, PAI program lead, and researchers at Accenture, the event will also include an interdisciplinary discussion and a facilitated design-thinking session intended to advance understanding of current challenges and solutions for implementing fair ML.

On Wednesday, January 29, PAI will host an Interactive Happy Hour on ABOUT ML. PAI’s ABOUT ML initiative is an ongoing multistakeholder effort to promote responsible AI development using the lever of documentation as both a process and an artifact. At the event, which is open to anyone interested in PAI’s work, participants can mingle and meet the PAI team, and will also have an opportunity to contribute to the ABOUT ML database of documentation questions, which aims to adapt existing research to additional contexts for ML system deployment.
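
For readers new to the initiative, documentation “as an artifact” can be as simple as a structured record that travels with a model. The sketch below is a minimal, hypothetical illustration; the field names are illustrative and are not the ABOUT ML schema.

```python
# A hypothetical, minimal documentation artifact for an ML system.
# Field names are illustrative only, not drawn from ABOUT ML.
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    model_name: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str] = field(default_factory=list)

doc = ModelDocumentation(
    model_name="loan-approval-v2",  # hypothetical system
    intended_use="Pre-screening consumer loan applications for human review",
    training_data_sources=["2015-2019 application records (anonymized)"],
    known_limitations=["Not validated for applicants under 21"],
)
print(doc)
```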

Additional FTA work at PAI explores the intersection between algorithmic fairness and the law, with a focus on divergences between the principles and intentions of anti-discrimination law and the technical realities of algorithm development. For instance, PAI’s research paper “On the Legal Compatibility of Fairness Definitions,” recently presented at NeurIPS, examines the tensions between legal and machine learning definitions of fairness. These distinctions matter: machine learning developers need to understand and incorporate legal definitions of fairness in order to produce algorithms with fair outcomes. Additional work, currently in progress, is exploring parameters for the responsible use of demographic data to mitigate algorithmic bias and discrimination. Another project builds on PAI’s criminal justice report to examine algorithmic bias in recidivism data.
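
To make the contrast concrete, here is a minimal sketch (illustrative only, not from the paper) of one common statistical fairness definition, demographic parity, which compares positive-prediction rates across groups. Anti-discrimination law, by contrast, evaluates discrimination through context-dependent legal tests rather than a single statistic.

```python
# Minimal sketch of demographic parity, one machine learning
# definition of fairness. Data below is hypothetical.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rate between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model predictions and binary group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5
```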

PAI is also currently seeking two fellows for work related to FTA issues. Our Methods for Inclusion Research Fellowship is designed to help PAI develop systems to include diverse perspectives in our research and policy work – especially those that are otherwise underrepresented in the AI ecosystem. PAI’s Diversity and Inclusion Fellow will conduct research focused on understanding and promoting diversity and inclusion in the field of artificial intelligence. Interested researchers are encouraged to apply.

The PAI team is looking forward to learning from our community and sharing our efforts to advance responsible AI at the 2020 FAT* conference. If you are attending, please join our presentation, tutorial, and Interactive Happy Hour. In addition to PAI’s presence, we’re proud to spotlight the following contributions from our Partner community at this year’s FAT*.

Tutorials featuring PAI Partners:

  1. Data & Society / Article 19 – Leap of FATE: Human Rights as a Complementary Framework for AI Policy and Practice (https://fat2020tutorials.github.io/human-rights/)
  2. Google – Probing ML Models for Fairness with the What-If Tool and SHAP (https://pair-code.github.io/what-if-tool/fat2020.html)
  3. Google – Translation Tutorial: Positionality-aware Machine Learning (https://sites.google.com/view/ai-kaleidoscope/fat-2020-tutorial)
  4. IBM – Hands-on Tutorial: AI Explainability 360 (https://github.com/IBM/AIX360/wiki/ACM-FAT*2020-Tutorial)
  5. Microsoft – The Meaning and Measurement of Bias (https://azjacobs.com/measurement)
  6. PAI / Accenture – How we organize shapes the work that we do (https://organizingandai.github.io)

Conference Papers authored by members of the Partner research community:

  1. Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing – D. Raji; A. Smart; R. White; M. Mitchell; T. Gebru; B. Hutchinson; J. Smith-Loud; D. Theron; P. Barnes (Session 1: Accountability)
  2. Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making – Y. Zhang; Q. Liao; R. Bellamy (Session 6: Values)
  3. Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning – E. Jo; T. Gebru (Session 7: Data Collection)
  4. Data in New Delhi’s predictive policing system – V. Marda; S. Narayan (Session 7: Data Collection)
  5. Quantifying the impact of overbooking on a pre-trial risk assessment tool – K. Lum; C. Boudin; M. Price (Session 10: Auditing/Assessment 2)
  6. Towards a Critical Race Methodology in Algorithmic Fairness – E. Denton; A. Hanna; J. Smith-Loud; A. Smart (Session 11: Sensitive Attributes 2)

CRAFT Sessions featuring PAI Partners:

  1. Creating Community-Based Tech Policy: Case Studies, Lessons Learned, and What Technologists and Communities Can Do Together
    Jennifer Lee (ACLU of Washington), Shankar Narayan (ACLU of Washington), Hannah Sassaman (Media Mobilizing Project), Jenessa Irvine (Media Mobilizing Project)
  2. From Theory to Practice: Where do Algorithmic Accountability and Explainability Frameworks Take Us in the Real World
    Fanny Hidvegi (Access Now), Anna Bacciarelli (Amnesty International), Katarzyna Szymielewicz (Panoptykon Foundation), Matthias Spielkamp (AlgorithmWatch)
  3. Algorithmically Encoded Identities: Reframing Human Classification
    Dylan Baker (Google), Alex Hanna (Google), Emily Denton (Google AI)
  4. Bridging the Gap from AI Ethics Research to Practice
    Kathy Baxter (Salesforce)
  5. When Not to Design, Build, or Deploy
    Solon Barocas (Microsoft Research New York, Cornell University), Asia J. Biega (Microsoft Research Montréal), Benjamin Fish (Microsoft Research Montréal), Luke Stark (Microsoft Research Montréal), Jedrzej Niklas
  6. Manifesting the Sociotechnical: Experimenting with Methods for Social Context and Social Justice
    Ezra Goss (Georgia Institute of Technology), Lily Hu (Harvard University), Stephanie Teeple (University of Pennsylvania), Manuel Sabin (UC Berkeley)
  7. Centering Disability Perspectives in Algorithmic Fairness, Accountability & Transparency
    Alexandra Givens (Georgetown Institute for Tech Law & Policy), Meredith Ringel Morris (Microsoft Research)
  8. Zine Fair: Critical Perspectives
    Emily Denton (Google), Alex Hanna (Google)