
Russia and China are using OpenAI tools to spread disinformation

Iran and Israel have been getting in on the action as well.

Hannah Murphy, Financial Times – May 31, 2024 1:47 pm UTC

[Image: OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." Credit: FT montage/NurPhoto via Getty Images]

OpenAI has revealed that operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as the technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI's policies prohibit the use of its models to deceive or mislead others.

The content focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments, OpenAI said in the report.

The networks also used AI to enhance their own productivity, applying it to tasks such as debugging code or doing research into public social media activity, it said.

Social media platforms, including Meta and Google's YouTube, have sought to clamp down on the proliferation of disinformation campaigns in the wake of Donald Trump's 2016 win in the US presidential election, when investigators found evidence that a Russian troll farm had sought to manipulate the vote.

Pressure is mounting on fast-growing AI companies such as OpenAI, as rapid advances in their technology mean it is cheaper and easier than ever for disinformation perpetrators to create realistic deepfakes and manipulate media, and then spread that content in an automated fashion.

As about 2 billion people head to the polls this year, policymakers have urged the companies to introduce and enforce appropriate guardrails.

Ben Nimmo, principal investigator for intelligence and investigations at OpenAI, said on a call with reporters that the campaigns did not appear to have meaningfully boosted their engagement or reach as a result of using OpenAI's models.

But, he added, "this is not the time for complacency. History shows that influence operations which spent years failing to get anywhere can suddenly break out if nobody's looking for them."

Microsoft-backed OpenAI said it was committed to uncovering such disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." It added that its safety systems already made it difficult for the perpetrators to operate, with its models refusing in multiple instances to generate the text or images asked for.

In the report, OpenAI revealed that several well-known state-affiliated disinformation actors had been using its tools. These included a Russian operation, Doppelganger, which was first discovered in 2022 and typically attempts to undermine support for Ukraine, and a Chinese network known as Spamouflage, which pushes Beijing's interests abroad. Both campaigns used its models to generate text or comments in multiple languages before posting on platforms such as Elon Musk's X.

It flagged a previously unreported Russian operation, dubbed Bad Grammar, saying it used OpenAI models to debug code for running a Telegram bot and to create short political comments in Russian and English that were then posted on the messaging platform Telegram.

X and Telegram have been approached for comment.

It also said it had thwarted a pro-Israel disinformation-for-hire effort, allegedly run by a Tel Aviv-based political campaign management business called STOIC, which used its models to generate articles and comments on X and across Meta's Instagram and Facebook.

Meta released a report on Wednesday stating that it had removed the STOIC content. OpenAI said it had terminated the accounts linked to these operations.

Additional reporting by Cristina Criddle

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
