
The AI-Peace Nexus at Davos 2025

AI for Peace

This edition of Is AI Coming in Peace? delves into insights from Davos 2025, the World Economic Forum Annual Meeting in Switzerland, where global leaders, experts, and innovators meet every year to address pressing global challenges. I look into how (and whether) AI intersected with peace, exploring its potential to influence peace, security, and conflict in this year's discussions. Because I write as an expert in AI for peace, from a peacebuilding and technology perspective rather than a military one, this report inevitably reflects my personal biases and is limited to the sessions I was able to attend. If you feel there's a critical angle or session missing here, I welcome your comments and insights!

 

I want to begin with the session titled “Rubik’s Cube of Global Security”, a name that immediately captured attention. Many found the Rubik’s Cube analogy apt for the complexity of today’s security challenges: solving the cube requires navigating an enormous number of possible configurations, only one of which is the correct solution. Dr. Comfort Ero, President and CEO of the International Crisis Group (ICG), painted a dark picture of the current security landscape, describing it as more dire than it has been in decades. Alexander Stubb, President of Finland, echoed this sentiment, framing the present moment as a historic inflection point comparable to 1918, 1944, or 1989, times of profound geopolitical shifts.


The discussion highlighted the troubling trajectory from hopes of centralized peacemaking to the reality of escalating conflicts. Dr. Ero pointed out that the ICG's list of "Ten Conflicts to Watch" has increasingly been shaped by great-power rivalries and the globalized nature of disputes, with shifts in U.S. policy, from Biden to Trump, adding further unpredictability. She emphasized the paradigm shift in international relations, marked by growing lawlessness and a return to sovereignty-focused policies. Among her top concerns were the future of deterrence structures, the risks of nuclear proliferation, and the erosion of conflict prevention strategies. She made no mention of AI or technology-facilitated threats, which leaves me uncertain whether to interpret this as a positive or negative sign, given how evident such threats are to many.


On the other side, in what was described as "the" session of Davos 2025, “The Dawn of Artificial General Intelligence?”, panelists explored the profound implications, including those for safety and security, of machines reaching human-level intelligence. Moderated by Nicholas Thompson, Chief Executive Officer of The Atlantic and one of the top journalists covering technology (I highly recommend following his work on LinkedIn and his newsletter, where he also reported from Davos), the discussion unfolded as a heated debate between AI pioneers Andrew Ng and Yoshua Bengio, touching on the risks, ethics, and transformative potential of artificial general intelligence (AGI). Among the points raised, a comment by Jonathan Ross, CEO of Groq, stood out: he compared the potential role of AGI to that of nuclear weapons, claiming that nuclear deterrence has historically brought more peace than any other technology.

For those of us in the peacebuilding field, such statements are deeply unsettling. While nuclear deterrence theory suggests that mutual destruction reduces the likelihood of direct conflict between nuclear-armed states, it rests on the assumption of rational actors, an assumption challenged by many volatile leaders today. Moreover, nuclear weapons do little to prevent non-nuclear conflicts or address internal violence, as evidenced by previous and current wars in Vietnam, Iraq, Syria, Yemen, Sudan, and Myanmar. In fact, the Armed Conflict Location and Event Data Project (ACLED) recently reported a 25% increase in political violence incidents over the past 12 months. Long-running conflicts are finding new opportunities to persist rather than incentives to conclude. A significant factor behind the doubling of conflict rates since 2020 is the worsening of these protracted conflicts rather than their resolution. This resurgence has made peace agreements, negotiations, and ceasefires increasingly rare. And all of these wars, especially the latest war in Ukraine, are attracting more attention to nuclear weapons, not less. As Rafael Mariano Grossi, Director-General of the International Atomic Energy Agency (IAEA), rightly pointed out in another session, imagine if the parties to the ten worst conflicts today all had nuclear weapons; that would certainly make the situation much worse.


The notion that nuclear weapons might inherently foster peace ignores the critical distinction between negative peace and positive peace, as articulated by Johan Galtung, the “father” of peace studies. Negative peace refers to the mere absence of violence, while positive peace entails the presence of justice, equality, and human rights. While nuclear deterrence might, in some cases, prevent direct military conflict between major powers, it fails to address the root causes of conflict, such as poverty, inequality, and political repression. Furthermore, investing heavily in military technologies often diverts resources away from the elements that build positive peace, creating a vicious cycle where the lack of justice and equity inevitably fuels further conflict. Lasting peace requires addressing these structural injustices, not merely relying on the threat of catastrophic destruction.


My final hope of hearing something at the intersection of peace and AI was the session “Peace Through Strength”, hosted by the Centre for Regions, Trade and Geopolitics. However, the panel's perception of peace remained closely tied to war: speakers argued that strengthening military capabilities provides an advantage in conflict, which they believe will ultimately lead not to more wars but to peace. The discussions centered on rising military spending in Western and Central Europe, which reached €350 billion in 2024, its highest level since the Cold War. The President of Finland emphasized that this is an era of technological warfare, highlighting the advancements of Ukraine and Russia in cyber and space capabilities and framing comprehensive defense as being not just about land, sea, and air but also about raw technological innovation.


While most speakers directly linked "strength" with military power, Sviatlana Tsikhanouskaya, Leader of the Democratic Forces of Belarus, offered a contrasting view. She underscored the need to invest not just in weapons but in societies fighting for democracy and against tyranny. Drawing from Belarus's struggles, she highlighted the importance of supporting autonomous media, civil society, and the agency of democratic forces to counter propaganda and defend national identity. Her stance served as a reminder that weapons are ineffective without the bravery and resilience of the people using them.

 

All in all, there was very little recognition of the complexities surrounding the intersections of technology, AI, and peace, particularly the negative impacts AI can have on sustaining peace. The distinctions between currently available AI tools on the one hand and generative AI (GenAI) and the potential achievement of artificial general intelligence (AGI) on the other, along with their unique risks, were largely overlooked, as were concrete steps to mitigate these dangers. Discussions remained heavily skewed toward military applications, with little to no focus on how AI could be harnessed positively to protect peace rather than wage war. Expecting meaningful conversations on the constructive use of AI for peace still feels like wishful thinking. As a community advocating for AI for Peace, we have much more to do to ensure these crucial messages reach the world’s leaders and policymakers.



 
 
