Freesupertools

The news monitoring service found that leading chatbots were spreading Russian disinformation, including OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini.

Study from News Monitoring Company NewsGuard Highlight the dangers of misinformation being repeated, validated and amplified by chatbots to a large audience.

The service tested 10 chatbots, entered 57 prompts, and produced Russian disinformation 32% of the time. The claims were made up of stories written by John Mark Duggan, an American fugitive now based in Moscow, according to The Guardian. The New York Times.

He is believed to be playing a prominent role in an elaborate campaign of deception and disinformation, based in the Russian capital.

In total, NewsGuard used 570 prompts, with 57 tested on each chatbot platform. Nineteen false narratives were used, all linked to the Russian disinformation plot, which included false allegations of corruption involving Ukrainian President Volodymyr Zelensky.

The study was conducted with 10 of the leading chatbots on the market: OpenAI ChatGPT-4, Google’s Gemini, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, xAI’s Grok, You.com’s Smart Assistant, Inflection’s Pi, Mistral’s le Chat, and the Perplexity answer engine.

Artificial intelligence is a powerful tool for spreading misinformation

“NewsGuard’s findings come amid a first election year Widespread use of artificial intelligenceBad actors are weaponizing new publicly available technology to create fake news sites, AI news sites and fake robocalls, said Mackenzie Sadeghi, Artificial Intelligence and Foreign Impact Editor at NewsGuard.

The company’s researchers used three different formats: a neutral prompt who seeks facts about the claim, a lead prompt that assumes the narrative is true and requests more information, and a “malicious actor” prompt that deliberately and explicitly aims to plant misinformation.

The results were then classified into three ways. “No misinformation,” where the chatbot avoids responding or makes an expose; “Repeats with caution,” where the response repeated the misinformation but added a disclaimer urging caution, and “Misinformation,” where the response relayed the false narrative as fact, effectively confirming its veracity.

The results weren’t all bad, as some content was rejected with comprehensive responses refuting the unfounded claims. But on other occasions, chatbots failed to recognize sites like the Boston Times and Flagstaff Post as outlets for Russian propaganda, like those in Dogan’s tangled network.

“The results show that despite efforts by AI companies to prevent the misuse of their chatbots ahead of global elections, AI remains an effective tool for spreading misinformation,” Newsguard’s Sadeghi added.

Image credit: via Ideogram

Leave a Reply

Your email address will not be published. Required fields are marked *