
US thwarts AI bot farm used by Russian actors to spread propaganda



In a significant move to counter disinformation, the US Department of Justice announced on July 9 that it had dismantled a Russian AI-powered bot farm used to spread propaganda. Nearly 1,000 social media accounts, allegedly posing as US residents, were seized. These accounts were used to promote narratives supporting Russia's invasion of Ukraine and to sway public opinion in Russia's favor. The operation marks a crucial step in combating state-sponsored disinformation campaigns and protecting democratic processes. Read more in this week's edition of Cyber Security Insights.

The US Department of Justice seized Russian AI-powered accounts

The US Department of Justice seized two domain names and searched 968 X accounts used by Russia's AI-powered bot farm, disrupting a state-sponsored, AI-enabled propaganda campaign. "As Moscow continues its brutal war in Ukraine and threatens democracies around the world, the Department of Justice will continue to use all its legal powers to counter Russian aggression and protect the American people," said US Attorney General Merrick B. Garland. FBI Director Christopher Wray underscored the operation's significance: "Today's action marks the first attempt to stop a Russian-backed AI-powered bot farm."

The Meliorator AI Bot Farm was Responsible for Propaganda Posts

Russian state-sponsored media organization RT is reported to have organized and run the scheme since 2022 using Meliorator, AI-enabled software for generating and managing bot farms. Meliorator has only been confirmed to have been used on X, but it could likely be expanded to other social media platforms. RT affiliates used Meliorator to create fake social media accounts posing as real people and to spread disinformation on X targeting countries including the US, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Disinformation on social media raises serious concerns, one of which is the filter bubble: recommendation algorithms on social media platforms surface content similar to what users have already liked or shared, which can cut users off from alternative viewpoints and isolate them in their own cultural or ideological bubbles. The Russian perpetrators are believed to have aimed to create such a filter bubble by churning out pro-Russian propaganda, making it harder for exposed users to encounter opinions critical of Russia and thereby building a strong support base.
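To make the mechanism concrete, the sketch below is a minimal, hypothetical model of a content-based recommender, not the actual algorithm of X or any real platform: the item names and topic vectors are invented for illustration, and the feed simply ranks unseen items by cosine similarity to what the user has already engaged with. Even this naive loop drifts toward more of the same content, which is the filter-bubble effect the bot farm allegedly exploited.

```python
import numpy as np

# Toy illustration of how a filter bubble can form. Items are reduced to
# 2-D topic vectors (axis 0: pro-Russia framing, axis 1: critical framing).
# All names and numbers here are hypothetical.
ITEMS = {
    "pro_russia_1": np.array([0.9, 0.1]),
    "pro_russia_2": np.array([0.8, 0.2]),
    "neutral_news": np.array([0.5, 0.5]),
    "critical_1":   np.array([0.1, 0.9]),
    "critical_2":   np.array([0.2, 0.8]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two topic vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(history: list[str], k: int = 1) -> list[str]:
    """Rank unseen items by similarity to the user's engagement history."""
    profile = np.mean([ITEMS[name] for name in history], axis=0)
    candidates = [n for n in ITEMS if n not in history]
    return sorted(candidates, key=lambda n: cosine(profile, ITEMS[n]), reverse=True)[:k]

# A user who engaged with one bot-amplified post keeps being shown more of
# the same: each click sharpens the profile, and the profile narrows the feed.
history = ["pro_russia_1"]
for _ in range(3):
    top = recommend(history)
    if not top:
        break
    history.append(top[0])
print(history)  # e.g. ['pro_russia_1', 'pro_russia_2', 'neutral_news', ...]
```

A real platform uses far richer signals and models, but the feedback loop is the same: engagement shapes the profile, and the profile narrows the feed, which is what makes mass-produced propaganda so effective at crowding out critical viewpoints.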

The AI Bot Farm is Part of Russia’s Propaganda Campaign

The Russian campaign has been relentless since the invasion of Ukraine in 2022, with the aim of spreading anti-Ukraine and pro-Russian narratives. A press release from the US Department of Justice gives the following examples of fake accounts created by the bot farm:

Figure 1 Example of Fake Accounts Created by the Bot Farm

A social media account claiming to belong to a US voter (left) posted a video of President Putin discussing his belief that certain areas of Poland, Ukraine, and Lithuania were "gifts" from the Russian military that liberated them from Nazi control during World War II.

Figure 2 Example of Propaganda from a Fake Account Created by a Bot Farm

A social media account claiming to belong to a US resident (left) posted a video (middle) claiming that the number of foreign fighters embedded with Ukrainian forces is significantly lower than estimated. The same account also posted a video (right) in which President Putin claims that the invasion of Ukraine is not a territorial issue.

FBI, Canada, and the Netherlands issue joint cybersecurity advisory to counter Meliorator

In connection with the US Justice Department's announcement, the FBI and US Cyber Command's Cyber National Mission Force (CNMF) issued a joint cybersecurity advisory in partnership with the Canadian and Dutch governments. The advisory details Meliorator's technology and aims to help social media companies stop the Russian government from exploiting it. X responded by voluntarily freezing the fake accounts identified in the court-authorized actions and used in the bot farm.

Figure 3 Joint Advisory on Meliorator

Platform companies are now required by law to combat disinformation

In 2022, the EU adopted the Digital Services Act (DSA), which, among other key requirements, obliges platform companies to prevent the misuse of their systems for disinformation or election manipulation. The following year, however, the EU published a report applying the DSA's risk management framework to Russian disinformation campaigns, calling out social media companies for enabling the Kremlin to run large-scale campaigns. The spread of Russian propaganda, long viewed as a problem in the EU, intensified after the invasion of Ukraine. On X in particular, hate speech and misinformation believed to originate from Russia are reported to have increased rapidly after the platform's acquisition by Elon Musk.

In Japan as well, the Study Group on Platform Services of the Ministry of Internal Affairs and Communications published a summary report on platform companies in February 2024, calling for faster deletion of harmful information and greater transparency in operating guidelines. Based on this report, a law was enacted revising the Provider Liability Limitation Law into the Information Sharing Platform Countermeasures Law, promulgated on May 17. The new law establishes regulations obliging large platform companies to respond to deletion requests within a set period and to formulate and publish their deletion criteria.

Summary

For many years, Russia has conducted disinformation campaigns around the world. Amid deteriorating relations with the West, Russia is using AI to expand these activities in an attempt to undermine support for Ukraine and manipulate public opinion in its favor. The EU has enacted the Digital Services Act (DSA) to fight misinformation and the dissemination of illegal content online, while Japan has enacted the Information Sharing Platform Countermeasures Law to require platform companies to protect users and delete harmful information quickly. The effectiveness of these measures, however, remains to be seen.

About our Cyber Security Insights

This blog post is part of our Cyber Security Insights series, released several times every month and providing invaluable insights into the evolving threat landscape. Crafted by the OSINT Monitoring Team of NTT Security Japan Inc.'s Consulting Services Department and NTT Security Sweden's Incident Response Team, our content includes expert analysis of recent breaches, vulnerabilities, and cyber events. Stay ahead of the curve with our timely updates and actionable intelligence, ensuring your digital assets remain secure in an ever-changing environment.

Read more Cyber Security Insights here.


Want to know more about how we can help you with your cybersecurity?

Book a meeting with NTT Security experts to learn more about our advisory services and penetration testing. We help you protect sensitive data while ensuring privacy and convenience.