
How AI Conferences Can Foster a Responsible Research Culture


As AI becomes more widespread, so, too, does awareness of this technology’s potential harms. Whether through unintended or malicious use, accidents, or inappropriate application, AI research can have negative downstream consequences for the environment, marginalized communities, and society at large. Knowing this, how can AI researchers better anticipate the impact their work will have on the world? And what can the international AI conferences where original research is presented do to foster this kind of reflection?

Earlier this year, the Ada Lovelace Institute, CIFAR, and Partnership on AI brought together more than 40 AI ethics experts and organizers of recent machine learning (ML) conferences to help answer these questions. Together, they considered what conference organizers could do to encourage authors submitting papers for conference review to reflect on the potential downstream impacts of AI research. A new report co-authored by all three organizations, “A Culture of Ethical AI,” synthesizes the insights gathered from this convening, offering five “big ideas” that conference organizers could put in place to promote responsible research.

“Conference organizers have an important role to play in developing a culture of ethical AI — and striving for alignment in best practices will be critical in the coming years to establish effective oversight of AI research,” said Christine Custis, Director of Programs and Research at Partnership on AI. “Through our collaboration with CIFAR and the Ada Lovelace Institute on the workshop and resulting report, we hope to encourage the research community not only to take more responsibility for its actions but also to pursue work that is ultimately in the interest of humanity.”

“AI has amazing potential for doing a lot of good in our world. But it also carries tremendous potential for harm if not conducted responsibly,” said Elissa Strome, Executive Director, Pan-Canadian AI Strategy, at CIFAR. “CIFAR is pleased to partner with our international collaborators Partnership on AI and the Ada Lovelace Institute in sharing the conversations and tools developed through this workshop, which we hope conference organizers worldwide can adapt in their own activities to help spread the practice of responsible AI.”

“The growth of AI research has taken place alongside an increased awareness of the potential harms of these systems for individuals, society and the environment. AI conference organizers have a clear and important role to play in shaping the culture of AI research through the implementation of ethical review practices, as well as other interventions encouraging responsible and reflective research,” said Andrew Strait, Associate Director, Research Partnerships, at the Ada Lovelace Institute. “The Ada Lovelace Institute is pleased to have partnered with CIFAR and Partnership on AI to deliver this important report providing a range of suggested best practices. We hope that it will empower conference organizers to put responsible AI at the heart of what they do.”

In the last few years, some leading AI and ML conferences have piloted interventions intended to increase reflection among AI researchers, encouraging authors to discuss the ethical considerations behind their choices and the impact of their work. The successes, challenges, and potential downsides of these pilots are not yet well understood, however, and many conferences have yet to implement similar practices.

With that in mind, PAI partnered with CIFAR and the Ada Lovelace Institute to create a space for members of the international AI community to discuss such efforts and consider how conference organizers can further advance AI ethics. The workshop that followed included dozens of attendees who convened to discuss the important role of conference organizers in developing a culture of ethical AI, share ideas, and consider how to make interventions more effective.

Synthesizing the workshop discussions, the report recommends that organizers across different AI conferences continue to collaborate in forums like this workshop to share lessons learned and discuss community-wide approaches for encouraging ethical reflection. It also offers five big ideas that conference organizers can implement to build the research community’s capacity to anticipate the downstream consequences of its work and mitigate potential risks:

  1. AI conference organizers can consider a mix of prescriptive and reflexive interventions to improve researchers’ ability to assess the ethical impacts of their work
  2. Conference organizers should prioritize training more researchers and conference reviewers on how to examine the potential negative downstream consequences of their work
  3. Organizers should engage with research stakeholders including impacted communities to understand how conferences can empower them
  4. Organizers could spotlight exceptional technical and ethically sound submissions
  5. Conference organizers could incentivize more deliberative forms of research by enacting policies such as revise-and-resubmit and rolling submissions

DOWNLOAD THE REPORT

We view this report as a menu of options that future AI and ML conference organizers can select from, pilot, and iterate on. We hope the workshop it was based on is just the first of many such venues for developing community-wide approaches to anticipating the societal impacts of AI research. If your institution is interested in joining the effort to create similar multistakeholder spaces, please get in touch.

If you are a conference organizer interested in introducing one of the interventions mentioned here, please reach out to Madhulika Srikumar, PAI’s Program Lead for Safety Critical AI. We would love to hear from you.