Extremists across the United States have weaponized artificial intelligence tools to help them spread hate speech more efficiently, recruit new members, and radicalize supporters online at unprecedented speed and scale, according to a new report from the Middle East Media Research Institute (MEMRI), an American nonprofit media-monitoring organization.
The report found that AI-generated content is now a mainstay of extremists’ output: they are developing their own extremist-infused AI models and are already experimenting with new ways to leverage the technology, including producing blueprints for 3D-printed weapons and instructions for making bombs.
Researchers at the Domestic Terrorism Threat Monitor, a group within the institute that specifically tracks U.S.-based extremists, lay out in stark detail the scale and scope of AI use among domestic actors, including neo-Nazis, white supremacists, and anti-government extremists.
“There was a little hesitation initially about this technology, and we saw a lot of discussion and debate among them [extremists] online about whether this technology could be used for their purposes,” Simon Purdue, director of MEMRI’s Domestic Terrorism Threat Monitor, told reporters at a news conference earlier this week. “In the last few years, we’ve gone from seeing AI content occasionally to AI being a big part of hate propaganda content online, especially when it comes to video and visual propaganda. So, as this technology develops, we’ll see extremists using it more.”
As the US election approaches, Purdue’s team is tracking a number of troubling developments in extremists’ use of AI technology, including the widespread adoption of AI video tools.
“The biggest trend we’ve noticed [in 2024] is the rise of video,” Purdue says. “Last year, AI-generated video content was very basic. This year, with the launch of OpenAI’s Sora and other video-generation and manipulation platforms, we have seen extremists use them as a means of producing video content. We’ve seen a lot of excitement around this as well, with a lot of people talking about how it could allow them to produce feature-length films.”
Extremists have already used the technology to create videos showing President Joe Biden using racial slurs in a speech and actress Emma Watson reading Mein Kampf aloud while wearing a Nazi uniform.
Last year, WIRED reported on how extremists linked to Hamas and Hezbollah are leveraging generative AI tools to undermine the hash-sharing database that allows major tech platforms to quickly remove terrorist content in a coordinated manner, a problem for which there is currently no available solution.
Adam Hadley, executive director of Tech Against Terrorism, says he and his colleagues have already archived tens of thousands of AI-generated images created by right-wing extremists.
“This technology is used in two basic ways,” Hadley tells WIRED. “First, generative AI is being used to create and manage bots that run fake accounts, and second, just as generative AI is revolutionizing productivity, it is also being used to create text, images, and videos through open-source tools. Both uses demonstrate the significant risk that terrorist and violent content could be produced and disseminated at scale.”