UNGA 78 – AI Governance Requires Global Action

As awareness of AI’s potentially world-changing impact has grown, so, too, have calls for new forms of governance to guide this technology. From my experience working at the intersection of society and tech, this is not a choice between innovation and speed on one hand and responsibility and safety on the other. Rather, to capture the benefits of technological innovation for people and society, we must attend to the enduring challenge of bridging the gap between the evidence on current risks, emerging scholarship on long-term risks, and the development of inclusive, adaptive governance approaches. And we must do it at pace.

It is important to remember that the fundamentals of good governance are well established. At its best, governance is the result of many voices coming together to debate, learn, and, whenever possible, act and speak together — an approach typified by the United Nations General Assembly (UNGA). As UNGA meets in New York this week to kick off its 78th session, the topic of how to govern AI will be front and center.

Those looking for guidance on how to maximize AI’s benefits will find no shortage of suggestions. In recent months, a variety of proposals have been put forward by both public and private entities on how AI should be governed. Actually arriving at any of these endpoints, however, will be more difficult than imagining them. Establishing effective governance will require the kind of collective action that can only be achieved with the sustained support of stakeholders across the AI ecosystem. Key to building that support will be confidence among all parties that their perspectives and expertise have been meaningfully incorporated, a process of multistakeholder collaboration that is core to my vision for Partnership on AI (PAI).

Additionally, the most important challenges related to AI will cross borders and disciplines — and so will their solutions. Since April, PAI has been leading the development of protocols for the safe and responsible deployment of foundation AI models, which are of particular public interest given their wide range of capabilities, including generative AI. Experts drawn from PAI’s global community of civil society, industry, and academia have been working on this set of adaptable guidelines for identifying and mitigating risks associated with these new forms of advanced AI. I am truly honored to be guided by the impressive and diverse community of experts who have engaged with us, bringing the perspectives required for this challenging work. Next month, we will release these protocols for public comment. Open knowledge-sharing will be essential for anticipating and responding to the wide range of impacts AI may have.

At the recent UN Security Council meeting on AI, Secretary-General António Guterres urged members “to approach this technology with a sense of urgency, a global lens, and a learner’s mindset.” I couldn’t agree more. This is what meeting today’s moment requires. Convening across sectors, borders, and perspectives is crucial if we are to safeguard society and advance AI’s benefits for all. This is a critical time for multistakeholder organizations to engage in setting the guardrails for safe and responsible AI development and for widening the community guiding the future of AI. We cannot delay.

For more than seven decades, government leaders from around the world have met at the UN each year to address the most pressing issues facing the international community. This week, I look forward to sharing this message as I participate in events organized around the new UNGA session, including those hosted by the Secretary-General’s Envoy on Technology, Amandeep Gill.