The Seoul Declaration by countries attending the AI Seoul Summit, 21-22 May 2024

Date published:
24 May 2024

Introduction

The Seoul Declaration confirms a shared understanding of the opportunities and risks posed by artificial intelligence (AI). It builds on the work of the Bletchley Declaration.

Ten countries and the European Union signed the Seoul Declaration at the AI Seoul Summit in South Korea on 21 May 2024. Twenty-seven countries and the European Union signed the Seoul Ministerial Statement on 22 May 2024.

The summit resulted in 3 outcome documents: 

- the Seoul Declaration
- the Seoul Statement of Intent toward International Cooperation on AI Safety Science (the Annex to the Declaration)
- the Seoul Ministerial Statement

Leaders agreed to the Seoul Declaration and the Annex. Ministers agreed to the Seoul Ministerial Statement on day 2 of the event, 22 May 2024.

These were originally published on GOV.UK.

Seoul Declaration

1. We, world leaders representing Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America, gathered at the AI Seoul Summit on 21st May 2024, affirm our common dedication to fostering international cooperation and dialogue on artificial intelligence (AI) in the face of its unprecedented advancements and the impact on our economies and societies. 

2. Building on the work laid out at the AI Safety Summit held at Bletchley Park in the United Kingdom in November 2023, we recognize that AI safety, innovation, and inclusivity are interrelated goals and that it is important to encompass these priorities in international discussions on AI governance to address the broad spectrum of opportunities and challenges that the design, development, deployment, and use of AI presents and may present. 

3. We recognize the importance of interoperability between AI governance frameworks in line with a risk-based approach to maximize the benefits and address the broad range of risks from AI, to ensure the safe, secure, and trustworthy design, development, deployment, and use of AI. We continue to focus on supporting the operationalisation of the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. We recognize the particular responsibility of organizations developing and deploying frontier AI, and, in this regard, note the Frontier AI Safety Commitments. 

4. We support existing and ongoing efforts of the participants to this Declaration to create or expand AI safety institutes, research programmes and/or other relevant institutions including supervisory bodies, and we strive to promote cooperation on safety research and to share best practices by nurturing networks between these organizations. In this regard, we welcome the Seoul Statement of Intent toward International Cooperation on AI Safety Science, which is annexed to this Declaration. 

5. We call for enhanced international cooperation to advance AI safety, innovation and inclusivity to harness human-centric AI to address the world's greatest challenges, to protect and promote democratic values, the rule of law and human rights, fundamental freedoms and privacy, to bridge AI and digital divides between and within countries, thereby contributing to the advancement of human well-being, and to support practical applications of AI including to advance the UN Sustainable Development Goals. 

6. We advocate for policy and governance frameworks, including risk-based approaches, that foster safe, innovative and inclusive AI ecosystems. Frameworks should facilitate a virtuous cycle between human creativity and the development and use of AI, promote socio-cultural, linguistic, and gender diversity, and promote environmentally sustainable development and use of technology and infrastructure throughout the life-cycle of commercially and publicly available AI systems. 

7. We affirm the importance of active multi-stakeholder collaboration, including governments, the private sector, academia, and civil society to cultivate safe, innovative and inclusive AI ecosystems, and the importance of cross-border and cross-disciplinary collaboration. Recognizing that all states will be affected by the benefits and risks of AI, we will actively include a wide range of international stakeholders in conversations around AI governance. 

8. We aim to strengthen international cooperation on AI governance through engagement with other international initiatives at the UN and its bodies, G7, G20, the Organization for Economic Co-operation and Development (OECD), the Council of Europe, and the Global Partnership on AI (GPAI). In this light, we acknowledge the Hiroshima AI Process Friends Group, welcome the recently updated OECD AI principles, and the recent adoption by consensus of the United Nations General Assembly resolution "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development" that solidified the global understanding on the need for safeguards for AI systems and the imperative to develop, deploy, and use AI for good, and welcome discussions on the Global Digital Compact in advance of the Summit of the Future in September 2024 and look forward to the final report of the UN Secretary-General’s High-level Advisory Body on AI (HLAB). 

9. Acknowledging the value of AI Summit dialogues as a high-level forum to advance discussion on AI governance which facilitates AI safety, innovation and inclusivity, we look forward to our third gathering at the upcoming AI Action Summit to be held in France.

Seoul Statement of Intent

1. Gathered at the AI Seoul Summit on 21st May 2024, and following on from the AI Safety Summit in Bletchley Park on 2nd November 2023 and acknowledging the Safety Testing Chair’s Statement of Session Outcomes from the Bletchley Leaders’ Session, world leaders representing Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America affirm the importance of international coordination and collaboration, based in openness, transparency, and reciprocity, to advance the science of AI safety. We affirm that safety is a key element in furtherance of responsible AI innovation. 

2. We commend the collective work to create or expand public and/or government-backed institutions, including AI Safety Institutes, that facilitate AI safety research, testing, and/or the development of guidance to advance AI safety for commercially and publicly available AI systems. 

2.1 We acknowledge the need for a reliable, interdisciplinary, and reproducible body of evidence to inform policy efforts related to AI safety. We recognize the role of scientific inquiry and the benefits of international coordination for the advancement of such inquiry, so that ultimately the benefits of AI development and deployment are shared equitably around the globe. 

2.2 We affirm our intention to leverage and promote common scientific understandings through assessments such as the International AI Safety Report, to guide and align our respective policies, where appropriate, and to enable safe, secure, and trustworthy AI innovation, in line with our governance frameworks. 

2.3 We express our shared intent to take steps toward fostering common international scientific understanding on aspects of AI safety, including by endeavoring to promote complementarity and interoperability in our technical methodologies and overall approaches. 

2.4 These steps may include taking advantage of existing initiatives; the mutual strengthening of research, testing, and guidance capacities; the sharing of information about models, including their capabilities, limitations, and risks as appropriate; the monitoring of AI harms and safety incidents; the exchange or joint creation of evaluations, data sets and associated criteria, where appropriate; the establishment of shared technical resources for purposes of advancing the science of AI safety; and the promotion of appropriate research security practices in the field. 

2.5 We intend to coordinate our efforts to maximize efficiency, define priorities, report progress, enhance our outputs’ scientific rigor and robustness, promote the development and adoption of international standards, and accelerate the advancement of evidence-based approaches to AI safety. 

3. We articulate our shared ambition to develop an international network among key partners to accelerate the advancement of the science of AI safety. We look forward to close future collaboration, dialogue, and partnership on these and related endeavors.

Seoul Ministerial Statement

Preamble 

P1. We, the Ministers of Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, the Republic of Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the United Kingdom, the United States of America, and the representative of the European Union, based on the discussion at the AI Seoul Summit Ministers' Session on 22nd May 2024 on the approaches to AI governance to promote safe, secure and trustworthy AI and sustainable AI development, hereby affirm the need for collaborative international approaches to respond to rapid advancements in AI technologies and their impact on our societies and economies. 

P2. We acknowledge the achievements initiated at the inaugural AI Safety Summit in Bletchley Park. Building upon the three inter-related priorities of safety, innovation, and inclusivity discussed on day 1 of the AI Seoul Summit, we now seek to focus on actionable items to implement them. Acknowledging our different domestic capacities, we affirm our shared intent to take meaningful steps to unlock the benefits of AI for all while addressing its risks. 

Safety 

1.1 It is imperative to guard against the full spectrum of AI risks, including risks posed by the deployment and use of current and frontier AI models or systems and those that may be designed, developed, deployed and used in future. Principles for AI safety and security include transparency, interpretability and explainability; privacy and accountability; meaningful human oversight and effective data management and protection. We encourage all relevant actors, including organizations developing and deploying current and frontier AI, to promote accountability and transparency throughout the AI lifecycle by seeking to assess, prevent, mitigate and remedy adverse impacts which may emerge. We further encourage all relevant actors to foster an enabling environment in which AI is designed, developed, deployed and used in a safe, secure and trustworthy manner, for the good of all and in line with applicable domestic and international frameworks. 

1.2 We recognize our role to establish frameworks for managing risks posed by the design, development, deployment and use of commercially or publicly available frontier AI models or systems in our respective jurisdictions. We recognize our increasing role in promoting credible external evaluations for frontier AI models or systems developed in our jurisdictions, where those models or systems could pose severe risks. We further acknowledge our role in partnership with the private sector, civil society, academia and the international community in identifying thresholds at which the risks posed by the design, development, deployment and use of frontier AI models or systems would be severe without appropriate mitigations. Criteria for assessing the risks posed by frontier AI models or systems may include consideration of capabilities, limitations and propensities, implemented safeguards, including robustness against malicious adversarial attacks and manipulation, foreseeable uses and misuses, deployment contexts, including the broader system into which an AI model may be integrated, reach, and other relevant risk factors. 

1.3 Assessing the risk posed by the design, development, deployment and use of frontier AI models or systems may involve defining and measuring model or system capabilities that could pose severe risks, in context and without appropriate mitigations. We recognize that such severe risks could be posed by the potential model or system capability to meaningfully assist non-state actors in advancing the development, production, acquisition or use of chemical or biological weapons, as well as their means of delivery. We affirm the continuing importance of acting consistently with relevant international law, such as the Chemical Weapons Convention and Biological and Toxin Weapons Convention, UN Security Council Resolution 1540, and international human rights law, in accordance with each state’s obligations. We stress the importance of multilateral discussion to promote AI safety and security. 

1.4 We further recognize that such severe risks could be posed by the potential model or system capability or propensity to evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation conducted without explicit human approval or permission. We note the importance of gathering further empirical data with regard to the risks from frontier AI models or systems with highly advanced agentic capabilities, at the same time as we acknowledge the necessity of preventing the misuse or misalignment of such models or systems, including by working with organizations developing and deploying frontier AI to implement appropriate safeguards, such as the capacity for meaningful human oversight. 

1.5 We acknowledge the importance of constructive dialogue with developers to address the risks of frontier AI models or systems, reaffirming the particular responsibility of developers for the safety of these systems. We further recognize the pressing need to take into consideration safety and security throughout the AI lifecycle. 

1.6 We affirm the unique role of AI safety institutes and other relevant institutions to enhance international cooperation on AI risk management and increase global understanding in the realm of AI safety and security. Through our AI safety institutes or other relevant institutions, we plan to share best practices and evaluation datasets, as appropriate, and collaborate in establishing safety testing guidelines. We aim towards interoperability across AI safety activity, including by building partnerships between AI safety institutes and other relevant institutions, recognizing at the same time the need for testing methodologies considering cultural and linguistic diversity across the globe. 

Innovation 

2.1 We recognize the importance of governance approaches that foster innovation and the development of AI industry ecosystems with the goal of maximizing the potential benefits of AI for our economies and societies. We further recognize the role of governments is not only to prioritize financial investment, R&D, and workforce development for AI innovation, but also to consider governance frameworks which include legal and institutional frameworks, including personal data, copyright and other intellectual property protections for the safe, secure and trustworthy development and deployment of AI. 

2.2 We recognize the transformative benefits of AI for the public sector, including in areas such as administration, welfare, education and healthcare. These benefits include using AI efficiently and effectively through accessible digital services and automated procedures that enhance citizen experience in accessing public services. Furthermore, we intend to support the adoption of AI in key industrial sectors like manufacturing, logistics, and finance to revolutionize productivity, reduce the burden on employees while protecting rights and safety and unlock new avenues for value creation. 

2.3 We are committed, in particular, to supporting an environment conducive to AI-driven innovation by facilitating access to AI-related resources, in particular for SMEs, startups, academia, universities, and even individuals, while respecting and safeguarding intellectual property rights. Also, we are committed to enhancing the availability of AI-related resources to empower researchers to leverage AI in their respective fields of study and to facilitate the responsible utilization of AI as a tool for enriching individual creative endeavors. 

2.4 We recognize the importance of sustainability and resilience in the ecosystem for AI innovation. In this regard, we encourage AI developers and deployers to take into consideration their potential environmental footprint such as energy and resource consumption. We welcome collaborative efforts to explore measures on how our workforce can be upskilled and reskilled to be confident users and developers of AI to enhance innovation and productivity. Furthermore, we encourage efforts by companies to promote the development and use of resource-efficient AI models or systems and inputs such as applying low-power AI chips and operating environmentally friendly data centers throughout AI development and services. 

Inclusivity 

3.1 In our efforts to foster an inclusive digital transformation, we recognize that the benefits of AI should be shared equitably. We seek to promote our shared vision to leverage the benefits of AI for all, including vulnerable groups. We intend to work together to promote the inclusive development of AI systems and the utilization of safe, secure and trustworthy AI technologies in order to foster our shared values and mutual trust. We recognize the potential of AI for the benefit of all, especially in protecting human rights and fundamental freedoms, strengthening social safety nets, as well as ensuring safety from various risks including disasters and accidents. 

3.2 In furtherance of our shared goal to inclusivity, we are committed to promoting AI education including through capacity-building related to AI systems and through increased digital literacy, contributing to bridging AI and digital divides between and within countries. We recognize the need to strengthen international cooperation in joint research and talent development, including with developing countries to enhance their capabilities in AI design, development and utilization. We seek to ensure socio-cultural and linguistic diversity is reflected and promoted in the AI lifecycle of design, development, deployment, and use. 

3.3 We are committed to supporting and promoting advancements in AI technologies, recognizing the potential to provide significant advances to resolve the world’s greatest challenges such as climate change, global health, food and energy security and education. We further seek to foster inclusive governance approaches by encouraging the participation of developing countries in joint efforts and discussions aimed at accelerating progress toward achieving the Sustainable Development Goals and promoting global common interests and developments. 

Way Forward 

W1. We commend the efforts undertaken by the Republic of Korea and the United Kingdom as the two co-chairs of the AI Seoul Summit on the agendas of safety, and sustainability and resilience. 

W2. We note the publication of the independent interim International Scientific Report on the Safety of Advanced AI and its work to facilitate a shared evidence-based understanding of the risks associated with frontier AI. We resolve to work together to advance future evidence-based reports on AI risk, and look forward to the final publication of the International Scientific Report on the Safety of Advanced AI ahead of the AI Action Summit in France. 

W3. We acknowledge the need to advance the science of AI safety and gather more empirical data with regard to certain risks, at the same time as we recognise the need to translate our collective understanding into empirically grounded, proactive measures with regard to capabilities that could result in severe risks. We plan to collaborate with the private sector, civil society and academia, to identify thresholds at which the level of risk posed by the design, development, deployment and use of frontier AI models or systems would be severe absent appropriate mitigations, and to define frontier AI model or system capabilities that could pose severe risks, with the ambition of developing proposals for consideration in advance of the AI Action Summit in France. 

W4. We reaffirm our shared intent to guide the design, development, deployment, and use of AI in a manner which harnesses its benefits for good. With the recognition that safety, innovation, and inclusivity are inter-related goals, we look forward to continuing our collaboration to advance discussions on AI governance and promote safe, secure and trustworthy AI for the good of all.