OpenAI and Meta Report Prevalent Use of Generative AI in  Political Disinformation

Days ahead of the 2024 US general election, the public was bombarded with information concerning the election outcome.

Even though it remains uncertain who will be elected the 47th president of the United States, tech giants Meta and OpenAI have raised concerns about the use of AI to spread political disinformation.

Meta and OpenAI reported that bad actors are exploiting their platforms to spread misleading information. The two companies stated that operatives based in China, Russia, Iran and Israel were interfering with the ongoing election campaigns by misleading the public.

AI Tools Used in Spreading Rumors

In a recent report, the Meta team noted that AI-generated content was commonly used in influence campaigns. Meta officials acknowledged that efforts to take down actors sharing fake information about the US campaigns have proven difficult. The social media company noted that the bad actors used AI-generated images and political deepfakes to mislead the public.

The Meta team stated that, after thoroughly assessing the fake content, it found the bad actors had relied on AI to create images and posts. When contacted, OpenAI was asked about the preventive measures it had taken to address the spread of fake news.

The OpenAI team confirmed that it is collaborating with various companies to enhance the safety and security of its AI models.

The tech company confirmed that it has leveraged the power of AI in the detection of suspicious activities. As criminals explore ways to take advantage of vulnerable systems, OpenAI has collaborated with its partners to prevent bad actors from accessing customers’ information. 

With its ongoing development work, OpenAI was pleased to state that its defense mechanisms have proved successful. The company said that its advanced tools have recently rejected commands issued by criminals.

AI Used in Political Campaigns

OpenAI confirmed that it has banned several accounts sharing questionable campaign posts and has reported the matter to law enforcement. The company pledged to support law enforcement in investigating individuals or businesses spreading rumours about the US elections.

The tech company described the criminal activities as deceptive attempts to manipulate public opinion or political outcomes while concealing their intentions.

OpenAI noted that the criminals attempted to conceal their true identities. Besides analyzing the damage caused by the spread of fake news, OpenAI intends to evaluate the impact of misleading information on the upcoming election.

The company plans to develop preventive measures to address the spread of inappropriate information. From its assessment, OpenAI observed that most of the fake campaigns reached their target audiences.

Criminals Leverage AI Tools in Conducting Crime

The tech company complained that none of the actors spreading the misleading information had been identified. 

The investigating team noted that more than five fake political campaigns were developed using OpenAI tools and shared across social media platforms such as X, Facebook, Instagram and Telegram.

Separately, the Meta team reported several AI-generated campaigns that appeared inappropriate. Meta noted that some of the campaigns were generated by an illicit Russian group dubbed “Bad Grammar”.

These campaigns focused on the Russian invasion of Ukraine, current political events and other topics. Meta noted that “Bad Grammar” created numerous comments in multiple languages to reach a larger audience in Russia, Ukraine, the US and European countries.

The Meta and OpenAI teams noted that another actor, dubbed Doppelganger, used ChatGPT to create articles and social media posts portraying Russia as a law-abiding country while belittling Ukraine and the US.

The Doppelganger content aimed to increase engagement and activity on platforms such as 9GAG. The tech company noted that Doppelganger attempted to create images of politicians from Western countries, but OpenAI blocked the commands.

Another threat group mentioned in Meta’s “Adversarial Threat Report” was the Israeli private company STOIC, which was running a campaign called Zero Zeno. STOIC generated comments using OpenAI tools to mislead individuals in Europe and North America.

Zero Zeno posted comments about the Gaza conflict on social media platforms. The Meta team noted that the bad actors used different tactics to deliver their message to the target audience.

About Author

Kenneth Eisenberg

Kenneth Eisenberg, a formidable voice in crypto journalism, crafts insightful pieces on blockchain's ever-evolving landscape. Merging deep knowledge with articulate prose, Kenneth's articles cut through the noise, offering readers clear, in-depth perspectives. As the digital currency world grows, Kenneth remains a beacon of expertise and clarity.
