PAI Developing Ethical Guidelines for Synthetic Media

The technologies that make synthetic forms of media like deepfakes possible can and will be used to manipulate, harass, and misinform. Deepfakes, for example, have already fueled gendered harassment around the world. As I observed last year, however, these same tools can also be used to speak truth to power, protect a person’s privacy, or even preserve historical memory. Given synthetic media’s potential for both positive and negative purposes, how do we ensure it is used for good?

Since 2018, Partnership on AI (PAI) has convened stakeholders from around the world to explore this question. With representatives from industry, civil society, academia, media, and journalism, we have evaluated both the limitations of and opportunities for synthetic media. We are excited to announce the next step in that work: the forthcoming Synthetic Media Code of Conduct, which will set normative guidelines for the use of synthetic media.

This Synthetic Media Code of Conduct will be developed with leadership from WITNESS, Adobe, and Microsoft, the collective input of our broader Partner community, and others in the synthetic media ecosystem. Over the next several months, we will work with these stakeholders to craft guidelines that can shape norms and behaviors across sectors, reaching those who develop synthetic media technologies, those who create synthetic media, and those who distribute it. We invite interested parties to connect and collaborate with us as we develop this resource.

Preparing for a more synthetic future requires both introspection and action. This means answering technical questions about how synthetic media is created and responded to, as well as examining the policies and social infrastructure that influence how the technology is developed and used. This is why PAI has not only studied and weighed in on the technical limits of synthetic media detection strategies and supported the development of authenticity infrastructure, but also grounded our recommendations in an understanding of the people who actually encounter AI-manipulated media.

Synthetic media’s impact will be felt globally by individuals across society, including artists, satirists, computer scientists, policymakers, journalists, activists, fact-checkers, misinformation researchers, and laypeople, including those targeted by malicious uses of the technology. It will also affect complex societal dynamics, like polarization, misinformation, privacy, self-expression, and democracy. PAI is thrilled to get started on this important project with our Partners.

“The PAI Code of Conduct is the multi-stakeholder effort the technology sector, and media in general, needs right now,” said Andy Parsons, Senior Director, Content Authenticity Initiative at Adobe. “There are techniques and ethical practices that are being brought together in a balanced, careful way with a focus on pragmatic solutions. I’m confident the impact of the work will be wide-ranging and positive, and Adobe is honored to be a part of it.”

“As synthetic media becomes more prevalent and increasingly difficult to distinguish from reality, it’s essential to establish best practices on methods and tools, journalistic inquiry, and media literacy,” said Eric Horvitz, Chief Scientific Officer at Microsoft and Chair of PAI’s Board of Directors. “We at Microsoft are enthusiastic about multiparty stakeholder efforts to establish guidance and guardrails, and we appreciate PAI’s leadership in moving this valuable conversation forward.”

“WITNESS’s ‘Prepare, Don’t Panic’ initiative focuses on countering the malicious uses of deepfakes and synthetic media,” said Sam Gregory, Program Director at WITNESS. “We center the threats experienced and solutions wanted by vulnerable and marginalized communities globally facing existing problems of media manipulation, state violence, gender-based violence and misinformation/disinformation. We push for the solutions needed by critical journalists and human rights defenders worldwide. A focus from the start of technology development on ensuring strong ethical boundaries is one step we’ve identified as synthetic media’s power for creativity grows, along with its potential for harm. We’re glad to partner with PAI on building a code of conduct that works.”

It is PAI’s hope that you will join us in grappling with the tradeoffs posed by a dual-use technology like synthetic media and, if relevant, consider adopting the eventual code of conduct. Of course, norm-setting is one of many policy instruments for shaping technology development and use. We recognize that the broader policy community will need to consider additional levers, and we look forward to collaborating with global policy leaders to refine and add precision to synthetic media policies.

If you’re interested in learning more about this exciting work, please reach out to claire@partnershiponai.org.