The annual Munich Security Conference, the premier security forum in Europe, offers an important platform for transatlantic countries to discuss international security issues and align their security and defense policies. Its agenda often provides a comprehensive picture of the security concerns of transatlantic nations, and of European elites in particular.
Since 2023, the boom in large language models and the rise of ChatGPT have left the international community deeply anxious about the security challenges that artificial intelligence may bring. In fact, AI-related concerns were high on the agenda of the recently concluded Munich conference. Three main-stage events were directly related to emerging technologies, and more than 20 side events and dinner gatherings were dedicated to AI, accounting for one-fifth of all discussions. Indeed, AI emerged as one of the most talked-about issues, alongside the Russia-Ukraine conflict, the Palestinian-Israeli conflict and supply chain security.
Amid increasing global geopolitical tensions and economic uncertainty, the security conference was marked by an overwhelming emphasis on security across issues ranging from supply chains and food to climate change and the refugee crisis. Even discussions on education, culture and scientific research were frequently colored by security considerations. Discussions about emerging technologies, an important driving force for global prosperity, likewise fell victim to this security obsession.
First, the participants were highly concerned about the risks arising from the misuse of AI. Their biggest worry was the threat that false information poses to democratic systems. Interestingly, on the very day the conference commenced, OpenAI released Sora, a new model that can create 60-second videos from text instructions, which triggered heated discussions among the participants. While impressed by the rapid pace of technological development, they focused on how false information could be amplified as video-generation models come on the scene. Many speakers noted that the foundations of Western democratic systems could be shaken if false information generated by AI were exploited for political purposes, for example to disrupt elections in 2024, a year in which more than 4 billion people in more than 40 countries and regions will cast votes.
On the same day, 20 leading tech and AI companies signed a voluntary agreement committing themselves to combating AI-generated misinformation intended to disrupt elections around the world. Among them were OpenAI, Anthropic, Google, Microsoft, Amazon and social platforms such as TikTok, X and Facebook. Together, they announced that they would jointly develop open-source tools to counter deceptive AI-generated content produced to mislead voters, monitor the dissemination of such content online, bolster public awareness and media literacy, and foster cross-industry capabilities to respond to it.
The announcement of the agreement, however, did not seem to inject confidence into the discussions in Munich. Instead, the commitments it outlined were criticized for lacking clarity and binding force. Some European participants even openly questioned the professional ethics of large AI companies, asking whether firms that wield enormous technical power but are not elected by the people can be trusted.
Second, despite efforts to align their approaches to AI governance, the United States and Europe remain divided on some key issues. In the past few years, the two sides have increased their coordination in the field of digital technology by establishing the U.S.-EU Trade and Technology Council and signing OECD agreements on digital services taxes. But major divisions persist over AI regulatory models. At the security conference, differences between the American and European models were repeatedly mentioned. The EU worries that unregulated digital technology will expose European citizens to data security breaches and harmful content, while the United States is concerned about the impact that possible regulations could have on its tech industry. Finding mutually beneficial solutions across the Atlantic thus became one of the most prominent topics in the many closed-door discussions and corridor conversations.
During discussions on AI and emerging technologies, U.S. representatives tried their best to win over their European counterparts by emphasizing the need for enhanced technological security. However, they conflated competition between systems with competition in technology, arguing that the two sides must unite to deal with the competition and challenges posed by China. Yet views on this proposition diverged. In a panel discussion titled “Net(work) Gains: Aligning Transatlantic Tech Governance,” I asked the panelists: “How do you view China’s role in global AI governance, and are you willing to strengthen cooperation with China in this field?”
Margrethe Vestager, executive vice-president for A Europe Fit for the Digital Age at the European Commission, advocated dialogue and cooperation with China despite differences on certain issues. Alex Karp, co-founder and chief executive officer of the U.S. company Palantir Technologies Inc., claimed that there should be no cooperation with China in any form, nor should any technology be transferred to China.
On that latter point, the panelists engaged in a heated debate. Kurt Sievers, president and CEO of NXP Semiconductors, headquartered in the Netherlands, argued that it is still necessary to distinguish between civilian and military technology. It is advisable to strengthen cooperation with China in civilian technology, he said, because China has a huge market and many leading tech companies. Karp, on the other hand, insisted that it is naive to believe military and civilian technology can be clearly distinguished, and that all cooperation with China should be strictly restricted to ensure security.
This scene seemed familiar, reminding me of a similar episode at MSC 2020. Back then, Nancy Pelosi, speaker of the U.S. House of Representatives, went to great lengths to persuade the Europeans to steer clear of Huawei’s 5G equipment in order to preserve Europe’s values and security. In the years since, Washington has largely succeeded in persuading Europeans not to use Chinese 5G equipment, but it does not appear to have provided them with better technical alternatives.
Third, the recent conference placed significant emphasis on addressing the risks posed by artificial intelligence from the perspective of geopolitical competition. The pre-conference MSC Report played an important role in shaping the direction of discussions and setting the tone of the conference. In its technology section, the report underscored that AI will be a key determinant of geopolitical power in the coming decades and that, throughout the tech sector, global cooperation has given way to geopolitical competition. According to the report, China and the U.S. are vying for dominance in AI, and as nations increasingly use technology to gain the upper hand over their geopolitical rivals, the new trends of tech weaponization and de-integration have repercussions for international security.
Many discussions on AI and emerging technologies were exclusive and held behind closed doors; few, if any, Chinese participants were invited. Based on my observation of several discussions and my interactions with U.S. and European experts on different occasions, it became evident that the logic of geopolitical competition dominated these tech discussions, and some participants even tried to deal with technological risks through political means. For example, tech competition was framed within the context of autocracy versus democracy; the global technology landscape was divided into competing factions; and there were proposals to maintain the security of small groups by outmaneuvering rivals. Consequently, MSC 2024 failed to present positive suggestions for global cooperation to tackle technological risks, nor did it provide any effective plan for promoting global supervision of such risks.
I also noticed that discussions on AI and emerging technologies usually included a representative from the Global South, a move seemingly designed to win him or her over to Western values. These representatives took a cautious approach, however, refusing to take an explicit position in the debate over digital autocracy versus democracy. Instead, they emphasized national and regional development over values-based considerations.
Kenya’s former Foreign Minister Raychelle Omamo, for example, said that the Global South places greater emphasis on development and posed a question: Who can better support their countries in areas such as infrastructure development, personnel training and public health, rather than simply applying labels to certain countries? Clearly, many countries in the Global South still prioritize digital development over geopolitical alignment.
It is also worth noting that this year the conference, which traditionally focuses on defense issues, featured many sub-forums on AI’s potential military applications. The discussions showcased AI applications in the Russia-Ukraine conflict and explored ways to use AI to increase the efficiency and accuracy of intelligence work. In addition, the MSC Report noted that AI weapon systems with limited (or zero) human oversight raise questions about accountability for the potential war crimes such systems could commit. But effective discussion of these questions was notably absent from the conference.
The report also pointed out that the limits of the logic of geopolitical competition are obvious in the technological field. There is a moral imperative for international cooperation on AI regulation, it said, adding that states worldwide must look for areas where positive-sum tech cooperation may still be possible. Unfortunately, as an important platform for communication among transatlantic and “like-minded” countries, the MSC failed to break away from the bloc-based, alliance-oriented mindset, placing the security of small groups before the common interests of mankind. It tended to politicize and weaponize tech issues, attempting to define friends and foes and to fragment the world. This approach threatens to disrupt global industrial and supply chains, ultimately leaving the world on a more unstable footing.
In a main-stage event titled “Augmented Rivalry: Geopolitics and the Race for AI,” moderator Ian Bremmer, president of Eurasia Group, talked about the uncertainties brought by technology.
“Technology flows freely. Scientific research needs to be shared and open-sourced,” he said.
Indeed, knowledge knows no boundaries. Attempts to build walls, engineer confrontation and impose sanctions to block technological exchange and cooperation will only raise barriers and increase the chances of miscalculation, adding more destabilizing factors to the world. Moreover, such moves cannot address the risks and challenges that emerging technologies bring. In this context, as AI technology continues to develop and iterate rapidly, countries everywhere need not only to strike a delicate balance between tech development and tech regulation but also to navigate the fine line between inevitable competition and indispensable cooperation. In dealing with these issues, dialogue always proves more fruitful than confrontation, and cooperation definitely trumps competition.