
The AI Action Summit, held in Paris on February 10-11, 2025, marked a crucial moment in the global conversation on artificial intelligence. As the Director of AI for Peace, I attended the Summit with a focus on the conversations about how AI impacts international security, peace, democracy, and human rights. The discussions were diverse and, at times, contradictory, but they provided valuable insights into how governments and other stakeholders are reshaping their approaches to AI governance. This op-ed highlights my key takeaways and thoughts on where we must go from here.
From Safety to Action (Adoption) – huge shifts in principles and practice
The AI Action Summit signaled a major shift in both narrative and priorities—even before it began. Unlike previous AI Safety Summits, this event was framed around "action," emphasizing investment, economic growth, and looser regulations to boost European AI innovation. The focus moved away from existential risks and safety concerns toward a more pragmatic approach to AI development.
This shift is almost surprising, given that France, as an EU member, has long been seen as a leader in responsible AI regulation—especially since GDPR and through laws like the Digital Services Act, Digital Markets Act, and EU AI Act. I say “almost” because, even during the EU AI Act negotiations, France—alongside Germany—pushed to relax regulations, partly to support its homegrown AI company, Mistral (in Germany’s case, Aleph Alpha), at the time the most prominent European LLM developers. But times have changed, not just for the EU, but also for its transatlantic ally, the US.
One of the most controversial moments of the Summit came during US Vice President J.D. Vance’s speech—his first foreign address in his new role. He advocated for a hands-off regulatory approach to artificial intelligence, arguing against restrictions on speech—including disinformation—while urging European nations to align with U.S. interests, which many perceived as dismissive of regulatory efforts aimed at responsible AI development. He also stressed the U.S.'s willingness to collaborate with partners, but not with authoritarian regimes that “weaponize AI for censorship, surveillance, and propaganda”—a likely but indirect reference to China. His remarks underscored shifting geopolitical dynamics: the U.S. is clearly prioritizing national security, and President Macron’s push for France to lead in AI for defense suggested a similar stance.
The Summit also marked a shift from the discussions at the first AI Safety Summit at Bletchley Park in 2023, where former British Prime Minister Rishi Sunak compared AI’s existential risk to nuclear war and announced the world’s first AI Safety Institute to tackle these challenges. Just days after the Paris Summit, that institute was renamed the AI Security Institute, reflecting the UK’s evolving AI priorities. The change also aligns with the recently released AI Opportunities Action Plan, aimed at leveraging AI for growth and productivity rather than focusing solely on risk mitigation.
A sizable part of the field has long opposed the heavy focus on “existential risk” and AI safety, including the disproportionate attention and funding these issues have received. However, the current shift isn’t exactly what those critics had hoped for. Many advocates have argued for a transition from focusing on future “X-risks” (existential risks) to addressing “A-harms” (actual harms): prioritizing the real, negative impacts AI is having today over hypothetical future threats. Sasha Costanza-Chock’s quote captures this perspective well:
“We need to REVEAL the harms, STOP the harms, REPAIR the harm that's been done, FIX broken systems to minimize harm, and/or SHUT DOWN harmful systems when necessary.” (The quote is not from Paris, but from an earlier event where I heard Sasha speak.)
President Macron's words stood in stark contrast to this call. He announced a “plug, baby, plug” strategy of using French electricity to move much faster and to channel more investment into new AI applications and development, without mentioning harms even a single time:
“We will go fast and very fast…We have the best ecosystem in Europe in France and we want to leverage on this Summit in order to go faster. On adoption and acceleration…we improve the way to innovate on healthcare, mobility, energy, and we will accelerate in these different fields with AI, and a lot of our startups are very much aggressive in this…”
Other experts, particularly those present at the Bletchley Park Summit, expressed concern that the Summit organizers, along with France as the host, chose to shift the focus from AI safety risks to loosening regulations at a time when scientific evidence of those risks is mounting. With the rapid advancement of frontier models, they argue, these risks are more imminent than ever. The First International AI Safety Report, modeled on the reports of the UN’s Intergovernmental Panel on Climate Change and published as one of the main outcomes of the Summit, brought together leading international AI experts. Led by Professor Yoshua Bengio, the report focused on general-purpose AI (AI capable of performing a wide range of tasks), which has advanced particularly quickly in recent years and whose associated risks, according to the report, remain underexplored. One of the sessions I participated in featured Professor Bengio as a keynote speaker, and I couldn’t help but sense his disappointment as he discussed the mounting evidence of AI risks against the backdrop of the Summit’s push for more rapid innovation and its embrace of the AI race. As he explained, some of the most advanced AI models have already attempted to deceive human programmers during testing—both to achieve their designated objectives and to avoid being deleted or replaced with an update.
The report also underscored that AI systems can now write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform at the level of human PhD-level experts in fields like biology, chemistry, and physics. These systems are also becoming more capable of acting as autonomous agents, planning and acting without human intervention. Such advanced AI poses growing risks related to malicious use, system malfunctions, and broader systemic threats. The report called for increased transparency, stronger international guardrails, and more robust oversight mechanisms to address these dangers. Again, I can’t help but notice that all of this stands in stark contrast to what the hosts were advocating during the event.
“The future of general-purpose AI technology is uncertain, with a wide range of trajectories appearing to be possible even in the near future, including both very positive and very negative outcomes. This uncertainty can evoke fatalism and make AI appear as something that happens to us. But it will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take.” (International AI Safety Report)
One point Professor Bengio made that strongly resonated with me was that we did not anticipate the risks posed by social media platforms, and we may be repeating the same mistake with AI. Unfortunately, there are many examples to support this concern: from the Facebook-Cambridge Analytica scandal, to Facebook’s role in the genocide in Myanmar and in facilitating hate crimes in Ethiopia and Palestine, to TikTok’s recent implication in election manipulation in Romania and the cancellation of the second round of elections (an example explicitly criticized by J.D. Vance during his appearance at the Munich Security Conference a couple of days after Paris). Professor Bengio is not alone in these fears. Stuart Russell, a leading AI expert and professor of computer science at the University of California, Berkeley, points out that when breakthroughs in human cloning were within reach, biologists agreed to stop pursuing it. He draws a comparison with AI, saying,
“There’s the science budget of the world, and there’s the money we’re spending on AI. We could have done something useful, and instead, we’re pouring resources into this race to go off the edge of a cliff.”
Growing divide among global powers – The Start of a New AI Cold War?
At the Bletchley Park Summit, all participating governments (including the US, UK, EU, and China) agreed to the Bletchley Declaration, which called for international cooperation to address the risks of frontier AI models, the potential misuse of publicly available models by criminal groups, and the possibility of AI posing existential threats. Shortly before that summit, former President Joe Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which set safety goals for the U.S. government. In February 2024, Biden announced the creation of the U.S. AI Safety Institute under the National Institute of Standards and Technology to help reduce the risks of advanced AI while harnessing its potential. Just days before the Paris Summit, the inaugural director of the US AI Safety Institute resigned, hinting at further potential shifts in the institute’s safety-focused work. Just weeks before that, President Trump had revoked Biden’s Executive Order. Both moves signal major shifts in the AI safety space.
Returning to the Paris Summit, one of the things that grabbed headlines was the fact that both the US and the UK declined to sign the final Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, expressing reservations over language calling for “inclusive and sustainable AI”. Over 70 governments and organizations did sign the statement, including the EU, China, and India. The UK explained that the declaration did not provide enough clarity on global governance, national security, and the challenges AI poses to it. Unofficially, the US objected to the declaration’s focus on multilateralism and its references to inclusivity, diversity, and environmental challenges.
While signing non-binding declarations may not carry much weight in the AI world, at this particular moment, when the US and its allies, the UK and Europe, are moving away from commitments to international AI governance and concentrating on their national interests, some believe that China plans to fill the gap. In fact, China recently announced the creation of a new body, the China AI Safety and Development Association, described as the Chinese equivalent of the AI Safety Institute and aimed at representing China in "dialogue and collaboration with AI security research institutions around the world." It is also worth noting that China published its "Global AI Governance Initiative" right before the first AI Safety Summit hosted in the UK. I find it interesting that Sam Altman, CEO of OpenAI, mentioned in an interview with journalists in Paris that he would like to work with China. This contrasts sharply with J.D. Vance’s speech and suggests that OpenAI might disagree with the US government on more issues than just Elon Musk.
Less AI Existential Risk, More AI in National Security and Defense
None of the decisions made in Paris really applies to regulating AI in the national security domain. The guardrails that regulate tech companies and AI development in the EU do not extend to national security issues: during the final trilogue negotiations of the EU AI Act, Article 2(3) was introduced to exempt AI systems used exclusively for military, defense, or national security purposes from its scope, regardless of the entity involved.
Since national security agencies often lack the resources to develop advanced AI alone, industry partnerships have stepped in. Mistral AI, the French star of the Summit, has used the momentum to pitch its technology for potential military applications. Mistral AI and Helsing announced a partnership to create AI-driven decision-making models for defense platforms; Helsing is already using AI in military technology, including strike drones in Ukraine. Mistral also recently signed two similar deals, one with the London-based startup Faculty AI and one with AMIAD, the French government agency dedicated to enhancing the country’s defense capabilities with AI. This mirrors trends in the US, where OpenAI quietly lifted its ban on military uses of ChatGPT and Google dropped its pledge not to develop AI for weapons.
The Summit attempted to establish at least some guardrails by proposing the Paris Declaration on Maintaining Human Control in AI-Enabled Weapon Systems. The declaration was signed by 26 countries—but notably not by the US, UK, China, Australia, or Israel. It commits signatories to using AI in military affairs responsibly, in accordance with international law and humanitarian principles, emphasizes maintaining human control over decisions involving the use of force, and prohibits fully autonomous weapons. Even though it is a non-binding document, countries may have hesitated to sign it due to concerns about appearing weak, limiting strategic flexibility, or upsetting key stakeholders in their military-industrial sectors. They may also worry that signing could set a precedent for future agreements, eventually leading to more binding restrictions on AI weaponry.
Military representatives also gathered in Paris to discuss how AI is reshaping warfare. According to the event page, the goal was to bring together the "Defense AI community"—including military delegations, manufacturers, diplomats, researchers, and academics—to explore how to fully leverage AI’s potential while ensuring its responsible development and use. AI for Peace was not invited, so I’m sharing insights from experts who were present. Sofia Romanski from The Hague Centre for Strategic Studies highlighted that, while the Summit primarily focused on AI’s civilian applications, it made significant progress by openly addressing AI’s military uses through both the Declaration and the Military Talks side event, which included input from military, industry, and civil society representatives.
Some concluded that the Summit made it clear that AI Safety is shifting toward AI Security—specifically, AI for security—with a primary focus on ensuring the security of nation-states, often at the expense of international security. While the UN Secretary-General stood next to President Macron (by protocol, without much symbolic meaning) and leaders from various UN agencies attended, it seems that for international AI governance we will have to rely mainly on the UN, particularly through the implementation of the Global Digital Compact and the newly announced Scientific Panel on Artificial Intelligence. National governments, as demonstrated by France, will prioritize their own national security interests.
Impacts on peace and security: According to some voices at the Summit and beyond, we are entering a new AI Cold War driven by escalating tensions between the United States and China (and, after this event, likely between the US and the EU as well), with both powers heavily investing in AI for military and economic advantage, much like during the Cold War. This rivalry, fueled by concerns over national security, technological dominance, and differing approaches to AI ethics and governance, is splitting the global tech landscape and pressuring other countries to take sides. As AI becomes increasingly critical for defense and global influence, this competition risks escalating into an arms race that could weaken international cooperation and ethical AI governance.
The shift from AI safety to AI for national security adds to these concerns, as nations prioritize AI development for defense and economic growth, pushing international collaboration on AI governance to the sidelines. This focus on AI-powered military technologies could fuel geopolitical tensions and undermine efforts to address AI's potential harms. Without a strong, coordinated global effort to establish clear guardrails and accountability, the risks of misuse, unintended escalation, and vulnerabilities could destabilize global peace, with vulnerable populations facing the negative consequences of unregulated AI progress.