
Fu Ying: As long as the U.S. and China can cooperate and work together, we’ll find a way to keep AI under control.

Feb 17, 2025

The Paris AI Action Summit was held on February 10 and 11, 2025. AI experts, policymakers, and industry leaders from various countries gathered for in-depth discussions on the opportunities and challenges of AI development and governance. A delegation of Chinese scholars led by former Chinese vice-foreign minister Fu Ying attended the summit. Below are the main points of Fu Ying’s remarks on AI safety and governance at the summit.


Fu Ying, China's former vice-foreign minister and former ambassador to the UK, on stage at a panel discussion in Paris.
 

On February 10th in Paris, Fu Ying stated that global governance of AI safety should transcend geopolitical interference, and China is willing to work with the international community to jointly promote the development of AI safety.

The Paris AI Action Summit, held on February 10–11, 2025, aimed to enhance the international community’s ability to act on issues related to the application and global governance of AI. The “Governing in the Age of AI” forum hosted by the Tony Blair Institute for Global Change (TBI) was one of the side events. Former Italian Prime Minister Matteo Renzi, who also serves as a strategic advisor to TBI, attended and delivered the opening speech.

Fu Ying was invited as a guest speaker for the “Advancing the Science of AI Safety” session, sharing the stage with guests including Yoshua Bengio, professor at the University of Montreal and founder of the Mila AI research institute, and Alondra Nelson, professor at the Institute for Advanced Study in Princeton. The session featured a lively discussion. Afterward, Fu Ying delivered a speech at the closed-door dinner and, throughout the summit, discussed issues of common concern with experts and scholars from various countries.

When asked how people in China view AI safety issues and why China has not established an “AI Safety Institute” (AISI), Fu Ying said that China has established the “Chinese AI Safety and Development Network” with government support, which is equivalent to other countries’ AISIs. China has a diverse, pluralistic ecosystem of AI application and safety governance, with multiple government departments, institutions, and enterprises focusing on and investing in AI safety. The aim of establishing a network rather than a single research institute is to allow everyone to participate, share knowledge and information, strengthen capacity building, and actively engage in international dialogue and cooperation. China participated in the UK’s Bletchley Summit and has been following the development of various countries’ AISIs, and China’s technological community maintains close communication with its international counterparts.

When asked how the Chinese view the risks brought by the development and application of AI technology, Fu Ying said that most people look at safety on two levels. The first is application. The Chinese government released the “New Generation AI Development Plan” as early as 2017, emphasizing safe, controllable, and sustainable AI progress. Currently, China’s AI applications are spreading fast and comprehensively, including in the economy, finance, urban management, healthcare, and scientific research. Risks and challenges have emerged in parallel, creating urgent demand for government regulation and for technical solutions to address them. Drawing on its experience with cyber laws and regulations, the Chinese government has successively issued laws and normative documents regulating AI, guided by the principle of balancing the encouragement of innovation with the mitigation of risk. Meanwhile, a batch of tech companies specializing in AI safety has also emerged.

The second level is the risks accompanying the development of AI technology itself. China released the “Global AI Governance Initiative” in October 2023, emphasizing the principle that AI development should be beneficial and advocating the establishment of risk-level testing and evaluation systems. China also signed the Bletchley Declaration. On July 1, 2024, at the 78th UN General Assembly, a China-led resolution on enhancing international cooperation in AI capacity building was adopted by consensus with the support of over 140 countries. China takes great interest in and pays close attention to international discussions about AGI risks, and China’s science and technology community maintains close communication with international peers and is broadly aligned with them on AI safety issues. The “International AI Safety Report” led by Yoshua Bengio received much attention in China. The report’s statement that “it will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take” was particularly thought-provoking.

When asked about China-U.S. relations and how the two countries could cooperate on AI, Fu Ying said that few see much promise for China-U.S. AI cooperation, as geopolitical tensions continue to cast a shadow over scientific collaboration. She recalled that in 2019, when Henry Kissinger and Eric Schmidt attended an AI safety roundtable in Beijing, they raised concerns about the future risks of AI. She responded: “As long as the U.S. and China can cooperate and work together with the rest of humanity, we’ll find a way to keep AI under control. But if countries remain at odds and even use advanced AI systems against each other, the machines are more likely to gain the upper hand.”

Recent years have witnessed consistent U.S. efforts to block China’s technological progress, poisoning the atmosphere for cooperation. If we mapped the global landscape of the third decade of the 21st century, we would see an exponential curve of technological innovation rising steeply, alongside the downward trajectory of China-U.S. relations. The extension of these two lines has led to an intersection: in this era of technological explosion, when humanity most needs to mobilize all its wisdom and energy for cooperation, some major countries are attempting to shut down collaborative platforms.

This phenomenon has led to two trends. One is the American tech giants’ lead in the virtual world, with rapid progress in cutting-edge AI innovation supported by enormous capital. The other is China’s lead in the real world, where the wide application of AI technology drives broad and deep technological innovation, backed by powerful manufacturing and a vast market. If history is a guide, one would naturally expect the combination of these two forces to be the best path toward the safe and responsible application of AI. But given the past few years, many see no such prospect. Geopolitical interference can therefore be considered a third level of concern regarding AI safety.

China maintains a relatively calm attitude toward China-U.S. cooperation and global governance, advocating respect for each other's core interests and major concerns on divergent issues and adhering to the principles of mutual respect, peaceful coexistence, and win-win cooperation.

Regarding the international debate over open-source versus closed-source paths for AI development, Fu Ying noted that, from the perspective of China’s academic community and enterprises, open source, despite its risks, helps identify and address safety vulnerabilities in a timely manner and aligns with the Chinese belief that AI technology should be developed to benefit the people. By comparison, the current complete opacity of some large companies’ models is more concerning. Yoshua Bengio countered that open-source AI technology could be misused by bad actors, though he acknowledged that open-source architectures make it easier to identify potential problems.

(Fu Ying is the former Vice Minister of the Ministry of Foreign Affairs of China. This article is based on her speech and remarks during the AI Action Summit in Paris.) 

 
