Katherine Keneally and Julia Ebner answer questions at the Calleva-Airey Neave Global Security Seminar Series hosted by Oxford University’s Blavatnik School of Government on May 1, 2025.

Q&A with Katherine Keneally: The Future of Terrorism Detection and Analysis

Editor’s Note

In recent years, the threat of terrorism has become more diffuse, unpredictable, and digitally enabled. No longer confined to organized networks or ideological movements, today’s violent actors often emerge from fringe online subcultures, driven by conspiracy theories, nihilism, or grievance-fueled worldviews.

As intelligence agencies confront this shifting landscape, traditional frameworks for detecting and preventing violence are being tested—and, in many cases, found wanting. From the rise in youth radicalization to the exploitation of AI and social media algorithms by extremist groups, the barriers to early detection are growing more complex.

How should we understand the evolving nature of terrorism, and what will it take to build more adaptive and effective threat assessment tools?

To explore these questions, Julia Ebner sat down with Katherine Keneally, a leading expert on domestic terrorism and targeted violence prevention. This conversation is part of the Calleva-Airey Neave Global Security Seminar Series at the University of Oxford.

How does the intelligence community assess threats of terrorism and political violence? What is the role of leakage?

In their simplest form, threat assessments are typically conducted using a variety of intelligence sources, ranging from open source intelligence (OSINT) to human intelligence (HUMINT) and signals intelligence (SIGINT), among many other types. This information is then used to assess the level of threat, looking at factors such as an interest in and/or history of engaging in violence or other illegal activities, affiliation with a terrorist group or ideology, and access to weaponry, to name a few.

Leakage – information shared, intentionally or unintentionally, about a person’s intent to commit an act prior to doing so – can play a significant role not only in making threat assessments but also in preventing political violence. Signals of an actor’s intent to commit violence or other activities may be “leaked” prior to an action through posts to social media, direct messages to others online, or verbal statements to a family member or friend. For example, in the recent bombing of a fertility clinic in California, the suspect posted online numerous times about bombmaking and plans to commit suicide, among other concerning content, but it does not appear that this leakage was reported to authorities or others. This is why reporting concerning activities to law enforcement is so important – we have seen that these reports have prevented attacks and saved lives. Additionally, this information can be used to help inform threat assessments.

How has the terrorism threat landscape evolved in the past few years? What are emerging trends in the motivations of terrorists and what does this mean for detection? 

In recent years, we have continued to see a shift away from defined ideological movements and groups. While terrorist and domestic violent extremist groups and networks still pose a threat, we have also seen significant acts of violence, including mass shootings, that initially appear to be ideologically motivated but, upon investigation, turn out to have unclear motivations – they may be influenced by a mix of conspiracy theories, have no ties to any specific ideological cause, or be loosely tied to one or more movements or belief systems. One manifestation of this is the recent trend of nihilistic violence: attacks driven primarily by a misanthropic worldview rather than an ideological motivation. This includes at least nine school shootings and disrupted school shooting plots tied to subcultures of nihilistic violence in the United States.

Let’s move on to tactics. What are the key trends in recruitment, communication and mobilization tactics? 

A couple of notable trends come to mind. The first is the targeting of youth, particularly minors, by threat actors both online and offline. Subcultures of nihilistic violence frequently target vulnerable young people through discussion groups related to mental health or self-help, such as eating disorder communities. Often starting on mainstream social media platforms, these subcultures lure vulnerable users onto alt-tech platforms, where they are manipulated and extorted. Additionally, we have recently tracked various white nationalist groups in the United States that have established youth wings within their organizations, attempting to recruit predominantly young white males behind the façade of claims that they simply seek to promote a healthy, strong community – or “brotherhood.”

The second is the use of mainstream narratives and content to radicalize others. Often, when we think of ideologically motivated groups and movements, we think of them using overt Nazi imagery or ISIS propaganda to radicalize and recruit people. This still happens, but we are also seeing more strategic attempts at radicalizing the mainstream public. For example, last year we identified a network of Telegram channels operated by the neo-Nazi accelerationist Terrorgram Collective that were intended to appear as “news” channels but were actually part of a coordinated effort to infiltrate mainstream spaces and radicalize others.

What are the most significant challenges the intelligence community is facing with this new threat landscape? 

There are quite a few challenges facing the Intelligence Community (IC) that make it more difficult to detect threats and produce accurate threat assessments. First, as we previously discussed regarding the evolution of the landscape, perpetrators of political violence do not always “fit” in any particular ideological box. While terrorist organizations and ideologies are still very much a threat, we are also seeing individuals commit violence motivated by a variety of beliefs or unclear ideologies rather than by affiliation with any defined group or movement. This can make it more difficult to assess a threat. Additionally, when we look at where threats are prevalent online, there is no shortage of platforms and websites available to threat actors, ranging from mainstream platforms to lesser-known, and typically even less moderated, alt-tech platforms. This presents a challenge for the IC not only in terms of intelligence collection, but also because some tech platforms are reluctant, or lack the mechanisms, to provide information to law enforcement when needed. While there are many other challenges, these two are quite significant.

Why do we see a rise in radicalization cases among the youngest generations, including among minors? What are common tactics used to groom young people? 

While the targeting of youth by terrorist and domestic violent extremist groups for radicalization and recruitment is not new, social media and the broader online ecosystem are making it easier to access minors. Studies over the last few years indicate that most teens are on at least one social media platform, with some research suggesting that up to 95 percent of youth between the ages of 13 and 17 use social media. While many platforms do not “allow” users under the age of 13 to have an account, research also indicates that much younger children, even those as young as 8, are on these platforms. Unfortunately, threat actors recognize this and are using these platforms to target youth. Depending on the group or movement, actors may use a variety of tactics, such as posting content featuring language, images, and memes that appeal to youth, direct-messaging vulnerable young users on mainstream social media platforms (and then directing them to other, less moderated platforms), and even sextortion.

How do the algorithms of tech platforms contribute to radicalization, and how is AI (e.g., LLMs and deep fakes) exploited by violent extremist movements? 

By design, social media platforms seek to maximize the amount of time users spend engaged on their platforms, and recommendation algorithms are the backbone of this effort. To do this, algorithms promote content that holds users’ attention – typically personalized content that is often extreme or sensational, designed to elicit a reaction or engagement (likes, shares, comments, or views). As more such content is recommended, it becomes more extreme and polarized, placing users in individual echo chambers where they are fed progressively worse content.
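To make that feedback loop concrete, here is a deliberately simplified sketch in Python of a feed that ranks items purely by predicted engagement. Everything in it is invented for illustration – the “sensationalism” feature, the scoring formula, and the preference-drift rule are toy assumptions, not a description of any real platform’s recommender – but it shows mechanically how engagement-only ranking can pull a user toward progressively more extreme content.

```python
# Toy illustration only: an engagement-maximizing feed ranker.
# All features and formulas are hypothetical, not any platform's actual system.

from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    sensationalism: float  # invented feature: 0.0 (neutral) to 1.0 (extreme)

@dataclass
class User:
    # The user's current appetite for sensational content,
    # nudged upward by whatever they consume.
    preference: float = 0.2
    history: list = field(default_factory=list)

def predicted_engagement(user: User, item: Item) -> float:
    # Toy score: content slightly MORE sensational than the user's current
    # baseline is predicted to get the strongest reaction (likes, shares).
    target = min(1.0, user.preference + 0.1)
    return 1.0 - abs(item.sensationalism - target)

def recommend(user: User, pool: list[Item], k: int = 3) -> list[Item]:
    # Rank purely by predicted engagement -- the core of the feedback loop.
    return sorted(pool, key=lambda it: predicted_engagement(user, it),
                  reverse=True)[:k]

def consume(user: User, items: list[Item]) -> None:
    # Consuming recommended content shifts the user's preference toward it.
    for it in items:
        user.history.append(it.title)
        user.preference += 0.5 * (it.sensationalism - user.preference)

if __name__ == "__main__":
    pool = [Item(f"post-{i}", sensationalism=i / 19) for i in range(20)]
    user = User()
    for step in range(5):
        feed = recommend(user, pool)
        consume(user, feed)
        print(f"step {step}: preference={user.preference:.2f}, "
              f"feed={[it.title for it in feed]}")
    # The printed preference drifts upward each step: the ranker keeps
    # serving items slightly more extreme than the user's current baseline.
```

Running the sketch shows the user’s preference climbing step after step even though nothing in the code “intends” radicalization; optimizing a single engagement signal is enough to produce the drift described above.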

Threat actors are often among the first to test and use new technologies for their activities, and AI has been no different. With the emergence of generative AI, we have seen movements attempt to use these tools to aid the creation of online content and propaganda, ranging from images depicting mass shooters as cartoon characters to more violent videos and imagery. While the creation and manipulation of content is not new, generative AI does enable actors – in some cases – to develop and share content more quickly and make it appear more realistic.

What are possible solutions to these challenges? What is the future of terrorism threat detection and prevention? 

Unfortunately, there is no simple solution to these challenges. Contributions are needed from many sectors of society – policymakers, government, non-governmental organizations, law enforcement, social media platforms, public health, and more. To adequately address this changing landscape and prevent political violence, these efforts need to go beyond threat detection alone. They need to include wider violence prevention efforts, including those that address risk factors for radicalization such as mental health issues, digital illiteracy, and social isolation.

As the political violence landscape continues to evolve, the field of terrorism threat detection and prevention needs to evolve with it. For example, the definitions and group/ideology typologies used to identify and prevent threats are not always applicable to the types of threats we are seeing. As a result, opportunities to identify radicalization prior to an attack have been missed – the Southport case is an example. The existing mechanisms for assessing threats need to adjust to this evolving landscape; otherwise, threats will be missed. Additionally, the field is going to have to adjust to the speed at which these threats are appearing. New technologies, including generative AI, and the evolving nature of the online ecosystem have the potential to accelerate the radicalization process and mobilization to violence. Within subcultures of nihilistic violence, we have observed how the absence of ideological drivers can shorten the “flash-to-bang” interval – the time between radicalization and violence – leaving a shorter window for detection and intervention.
