Leaders from government, international organizations, and academia headed to Paris this month for the AI Action Summit, where they engaged in important discussions on how AI can serve the public interest. Key conversations centered on providing independent and reliable access to AI, developing more environmentally friendly technologies, and promoting effective global governance.
Summit week, Feb. 6-11, 2025, featured nearly 100 events worldwide, including an international conference on AI and society and a discussion series on AI and culture.
Several Stanford HAI faculty and fellows participated in these discussions. HAI Co-Director Fei-Fei Li gave the summit keynote address — emphasizing the need for human-centered AI governed by science-based, pragmatic policies — while senior fellows Erik Brynjolfsson and Rob Reich, HAI Executive Director Russell Wald, and HAI Policy Fellow Riana Pfefferkorn joined additional summit week events and discussions.
In this conversation, they share their insights from the event.
The summit was framed around action on AI. What did you hear that had real energy behind it?
Reich: The French AI Action Summit heralded a major shift in AI governance away from AI risk and safety and toward AI opportunity and national self-interest. Three powerful tributaries merged to create this new current. First came France’s surge of AI ambition, determined to prove that Europe could do more than merely regulate California’s tech giants. Next flowed the open-source movement, successfully steering discourse away from closed ecosystems and toward a vision of distributed innovation. Finally, the breakthrough of China’s DeepSeek — achieving frontier-level performance at a fraction of the cost — swept in like a flash flood, intensifying the focus on geopolitical competition. Europe, watching this confluence of forces, was eager to ride the rising tide.
The summit’s tone was captured perfectly in Vice President JD Vance’s opening declaration: “I’m not here this morning to talk about AI safety, which was the title of the conference a year ago. I’m here to talk about AI opportunity.” This shift reverberated beyond Paris — within days, the UK had rebranded its “AI Safety Institute” as the “AI Security Institute.”
This demotion of safety concerns to an afterthought comes at a particularly troubling moment. As AI capabilities accelerate at a dizzying pace, industry leaders like Anthropic’s Dario Amodei, OpenAI’s Sam Altman, and DeepMind’s Demis Hassabis warn that radically transformative AI looms on the near horizon.
Concerns about AI consciousness and free will remain science fiction rather than science, yet dismissing safety concerns in pursuit of accelerated development betrays a dangerous myopia. The notion that safety advocacy merely serves as corporate protectionism collapses in the face of DeepSeek’s success. Rather than choosing between innovation and safety, we need both.
In this context, the summit’s bright spot — the announcement of ROOST (Repository of Robust Open Online Safety Tools) under Camille François’s leadership — stands out precisely because it dared to bridge this artificial divide. That this common-sense initiative combining openness with safety appeared radical amid the summit’s innovation-at-all-costs atmosphere speaks volumes. The diplomatic failure that followed — with neither the US nor UK signing even the summit’s watered-down final statement — only underscores our fraught technological and geopolitical moment. What could have been a crucial opportunity to advance global AI governance instead became a showcase of national technological ambitions, leaving the harder work of ensuring safe AI innovation for another day.
History teaches us that sometimes it takes a catastrophic flood to spur communities to build proper defenses — but nothing requires us to learn safety’s lessons the hard way, if only we can find the collective will to act.
At this summit, we heard the first public speech focused on AI by the Trump Administration, delivered by Vice President Vance. Do you anticipate any changes to AI policy, in the U.S. or more broadly?
Wald: The AI Action Summit in Paris marked a shift in U.S. policy, a very different perspective from previous years, with American innovation and technology leadership now the priority. The U.S. will most likely adopt a more laissez-faire approach to AI regulation, while Europe may become more open to innovation despite its more restrictive regulatory stance.
What do global governments need to get right to better govern AI?
Li: The Paris AI Summit was quite energizing because it’s so obvious that global leaders, from heads of state to policymakers and ministers, are recognizing this AI moment. To start, the humility to self-educate is very important. As my AI governance piece in the Financial Times noted, we should lead with science, not science fiction. And in order to respect the science, we need to understand the facts, and that starts with education. Global governments need to invest the time, effort, and people to understand what this technology is, and support that ecosystem. That ecosystem includes energy, the public sector, and academia, as well as private industry and investments.
And last but not least, governments must be pragmatic. Governments, by nature, can sometimes speak ideologically. Even at the summit, we saw different ideologies clash. But technology is a double-edged sword. AI, if used right, can help so many people across sectors, from medicine to agriculture, from education to manufacturing. The best approach is to create guardrails while protecting innovation through pragmatic policies.
The summit included numerous side events and conversations. What themes emerged from these discussions?
Pfefferkorn: One of the key themes that emerged from the side events around the AI Action Summit was the growing depth and diversity of research in AI safety. Unlike broad discussions on AI governance, the conversations around safety demonstrated a serious engagement with present-day harms, moving beyond hypothetical risks. The conference organized by the International Association for Safe and Ethical Artificial Intelligence (IASEAI) and Humane Intelligence showcased a more grounded and urgent approach to addressing real-world challenges posed by AI systems.
Another major theme was the role of different AI safety institutes worldwide and the question of whether the U.S. will maintain its leadership in this space. Some expressed concerns about “anticipatory disappointment” — the risk that bold commitments to safety might be watered down in favor of more incremental or symbolic actions. At the same time, there was a vibrant and growing community actively working on AI safety and ethics from multiple angles, moving past abstract discussions toward tangible interventions.
Funding patterns were also a crucial topic. Many discussions highlighted the need to ensure AI safety funding is distributed across a broad range of research areas and not overly concentrated on generative AI. Initiatives like the launch of ROOST underscored the importance of diversifying investment in safety-focused research and interventions, ensuring that emerging risks are met with the necessary resources.
What was the role of Stanford HAI at the AI Action Summit?
Li: I was the keynote speaker for the opening, which was an honor. My colleague Russell and I also spent quite a bit of time meeting colleagues, policymakers, industry leaders, and AI researchers in the ecosystem. I caught up with AI friends from Google to Meta to other universities, and I also had bilateral meetings with ministers of technology or information from various countries, including Singapore, America, the UK, and France. We were invited to a dinner by the president of France, Emmanuel Macron, and attended an event at the American Embassy, where we heard Lynn Parker (Principal Deputy Director of the White House Office of Science and Technology Policy) talk about America’s opportunity in this innovation era.
Wald: HAI had a strong presence at this year’s summit. Besides Fei-Fei’s keynote, HAI Senior Fellows Erik Brynjolfsson and Rob Reich were actively involved in meetings throughout the week. Erik participated in events covering the economic impacts of AI, while Rob took part in a panel discussion on AI and the Future of Democracy: Challenges and Opportunities. We received gracious support from HAI Affiliate member AXA, which co-hosted a reception the evening before the summit. The event featured speakers including Alexander Vollert, COO & CEO of AXA Group Operations; Anne Bouverot, Special Envoy for the Paris AI Action Summit; and Josephine Teo, Singapore’s Minister for Digital Development and Information, and convened leaders from industry, policy, and civil society.
Stanford’s representation at the Summit highlighted HAI’s leadership, faculty, and fellows, further solidifying HAI as a global brand and a key driver of Stanford’s AI objectives worldwide.