Charting a Collective Response to Violent Extremist Use of AI
This year, practitioners, policymakers, and community organizations working to prevent and counter violent extremism (P/CVE) have been grappling with one common question: how do we respond to the rapidly evolving threats posed by terrorist and violent extremist use of AI?
Over the last several months, our team has had the opportunity to reflect on this question alongside P/CVE practitioners, online safety experts, and technologists who are thinking deeply about the societal implications of AI. From global summits to research convenings, concern about the new risks posed by AI is palpable, and there is growing alignment on the need to respond quickly rather than repeat the painful lessons about online harms learned during the advent of social media.
At the Eradicate Hate Global Summit, experts raised alarm about the use of AI to produce terrorist propaganda, facilitate recruitment, and provide instructional support for attack planning. Presentations at September’s Trust and Safety Research Conference echoed these concerns, with researchers sharing early findings on how violent extremists are currently using AI tools.
Similarly, participants at Public Safety Canada’s Megaweek underscored how AI is reshaping digital media ecosystems – posing threats to the quality of our information environments and to social cohesion. Meanwhile, the Responsible Tech Summit facilitated deep discussions about the rise of AI companions and the potential for these tools to reinforce grievances and weaken social relationships – known risk factors that may contribute to radicalization to violent extremism.
These discussions reflect a growing awareness of three central risks. First, AI is changing the nature of terrorist and violent extremist content (TVEC) online by enabling propaganda production, localization, translation, and campaign design at greater speed and with the potential for greater efficacy. The launch of AI tools enabling multimodal content creation has raised new questions about the speed and scale at which synthetic TVEC could be created and spread. The convincing and visceral nature of AI-generated videos adds a new layer of complexity to preventing the spread of TVEC and its resulting harms.
Second, there is a risk that AI may be used to facilitate attack planning. In the aftermath of the Pirkkala stabbing attack in May 2025, researchers speculated that the perpetrator may have used generative AI to prepare for the attack. Moreover, AI’s ability to reproduce highly technical or restricted information and communicate it in accessible formats is a central concern for those focused on frontier mitigations – particularly when it comes to protecting against the use of AI to facilitate the development of chemical, biological, radiological, and nuclear weapons.
Lastly, AI has the potential to accelerate or introduce new pathways to radicalization to violence. People across the globe are increasingly turning to AI for companionship and consulting general-purpose AI tools on deeply personal life decisions. Without appropriate safeguards, these technologies have the capacity to validate hate or even condone violence.
Across the P/CVE and trust and safety communities, there’s a clear call for solutions that address these emerging risks. The challenge now is to move with urgency, but also with care – to build responses that are grounded in evidence, uphold human rights, and promote a free, open, and secure internet.
When the conversation turns to solutions, it’s clear that we need to build better bridges between subject matter experts, civil society, tech companies, legislators, and regulators – bridges that enable coordinated, evidence-based, and dynamic responses.
The Christchurch Call to Action provides a framework specifically designed to address challenges like this using a multistakeholder approach.
At previous Leaders’ Summits, Christchurch Call Leaders identified responding to TVEC risks in new tech sectors as a priority – particularly in light of the rise of generative AI. Over the coming years, we’ll work to create processes that enable much-needed exchange and collaboration between stakeholders on this issue.
Supported by funding from the Community Resilience Fund at Public Safety Canada, our new project, Elevate, will engage the Call Community to assess needs and identify opportunities to develop solutions, tools, and resources that address terrorism and violent extremism risks in new tech sectors – with an initial focus on AI.
This work advances specific Christchurch Call Commitments related to preventing the dissemination of TVEC, developing technical solutions to detect and remove TVEC, and coordinating cross-industry efforts for knowledge-sharing.
What’s next?
The first phase of Elevate will focus on assessing needs and driving experimentation. In subsequent phases, our team will establish processes for continuously innovating, refining, and maintaining tools and resources that support TVEC prevention and help platforms respond to emerging threats. Concurrently, we will support small and medium-sized platforms in responding to TVEC through the development of open-source safety tools in partnership with ROOST.
We are now scoping activities that build on emerging research led by Christchurch Call partners. We look forward to engaging with our multistakeholder community on this topic and creating shared solutions that drive impact. To get in touch with our team and learn more about Elevate, please reach out to us at: info@christchurchcall.org.