#BotSpot: The Intimidators

Twitter bots unleashed in a social media disruption tactic


Overnight from August 28 to August 29, a major Twitter botnet opened a new front in its ongoing attempts to intimidate @DFRLab, creating fake accounts to impersonate and attack our team members.

The impersonator posts were amplified by thousands of automated “bots”. So, too, was a post from the Atlantic Council, the non-partisan think tank of which @DFRLab is a part; so were the Twitter accounts of @DFRLab staff; so were the accounts of unrelated users who posted using keywords. Tens of thousands of automated accounts were deployed for this operation, in what was apparently meant as a show of force.

These incidents took the bots to a level of harassment and intimidation we had not previously seen directed at @DFRLab. However, they also allowed us to conclude that the initial botnet involved was either run by, or commissioned by, pro-Russian individuals. The algorithm driving a second, larger botnet proved simple enough to identify and subvert.

You can view the prelude to this episode, and how @DFRLab managed to draw the bot managers’ fire, here.

Mock and shock

Overnight from August 28 to 29, two long-dormant Twitter accounts were repurposed with the images of Ben Nimmo, Senior Fellow with the Atlantic Council’s @DFRLab (the lead author of this and the earlier articles), and @DFRLab Director Maks Czuperski. Each revived account had a little over 1,000 followers.

The first was seemingly meant in mockery, reversing Nimmo’s image, claiming the screen name “Veniamin Nimovitch”, and rewriting the bio to insert the Kremlin as a location.

Left, the author’s actual Twitter bio. Right, the fake account.

The account appears to have been created as a parody, rather than an attempt at deceit:

Within a few hours, its screen name had changed to “Norm Gomez” and the bio was removed, but the tweets remained.

The posts from @al5d, renamed “Norm Gomez”. Archived on August 29, 2017.

The second account was darker:

https://twitter.com/MaxCzuperski/status/902513971449790464

Upper image: The tweet from the fake Czuperski account. Lower image: Screenshot of the tweet. Twitter deleted the fake account for impersonation shortly after the screenshot was taken; however, the post was archived on August 29, 2017.

On this occasion, the fake account exactly copied the avatar, biography, and background of the genuine Maks Czuperski, the only difference being the spelling of the handle. This was clearly an attempt to shock and deceive, rather than mock:

Left, the genuine account. Right, the doppelganger, since deleted, but archived on August 29, 2017.

The most popular “Nimovitch” tweet was retweeted over 5,000 times. The fake Czuperski tweet was retweeted over 21,000 times. In both cases, bots were the main source of amplification.

Social media cyber attack

Simultaneously, the Atlantic Council’s main Twitter feed received sustained bot attention. This consisted of tens of thousands of accounts retweeting a post in which the Council recommended @DFRLab’s research. The vast majority of accounts were faceless, without an avatar picture or background; as we have described elsewhere, this is a classic symptom of a bot network.

As the below timeline shows, the attack began late on August 27, peaked on August 28, and continued into August 29.

Timeline of retweets of the Atlantic Council post, from a machine scan. The left-hand axis indicates the number of tweets per minute.
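The scan behind this timeline is simple to reproduce. Below is a minimal sketch in Python, assuming a retweets.csv export with a created_at timestamp column (a hypothetical input format; any dump of retweet times would serve); it bins retweets per minute, the measure plotted above.

```python
# Minimal sketch: bin retweet timestamps into a per-minute timeline.
# "retweets.csv" and its "created_at" column are assumed inputs, not
# the exact tooling used for the scan above.
import pandas as pd

df = pd.read_csv("retweets.csv", parse_dates=["created_at"])

# Count retweets per minute. Sustained spikes of hundreds of retweets
# per minute, driven by low-follower accounts, are a strong automation
# signal.
per_minute = df.set_index("created_at").resample("1min").size()

print(per_minute.sort_values(ascending=False).head(10))
per_minute.plot(title="Retweets per minute")  # requires matplotlib
```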

On the face of it, the use of a botnet to amplify a post which exposes that botnet’s work appears counterintuitive. However, none of the bots had any significant following. Thus the massive retweeting did not spread to genuine Twitter users. Instead, the main effect was to bombard the Twitter feeds of the accounts mentioned in the post with an endless series of notifications:

Three of the many notifications triggered by the botnet in the author’s feed.

This is the social media version of a Distributed Denial of Service (DDoS) cyber attack. In a classic DDoS attack, hackers use hijacked computers to flood a website with thousands or millions of queries, overloading it and shutting it down. On this occasion, the attack was carried out by apparently hijacked accounts, and appeared designed to intimidate and disrupt the Atlantic Council’s work and social media promotion.

Attributing the attacks to pro-Russian sources

However, the episode also gave us further insight into the way the bots were used, and therefore into their possible motivation and affiliation.

In total, the episodes covered in our recent reporting, which triggered this attention, involved seven major bot interventions:

1. The retweeting of a post by Russia analyst Julia Davis, involving some 7,000 accounts:

2. The initial tweet attacking ProPublica and @DFRLab on August 24, amplified by some 23,000 accounts (now only available in archive);

Screenshot of the first attack on @ProPublica and @DFRLab, from the archive.

3. The follow-up tweet attacking ProPublica and @DFRLab, amplified by some 20,000 accounts;

https://twitter.com/yoiyakujimin/status/901062342208782336

4. The retweeting of the post by @AtlanticCouncil, by over 108,000 accounts;

5. The retweeting of the “Nimovitch” account;

6. The retweeting of the fake “Max Czuperski” account;

7. The following, by thousands of bots, of @DFRLab staff.

Five of those demonstrably involved the same botnet, which was deployed with increasing aggression. The evidence for this conclusion is set out below.

“We see you.”

The increasing aggression suggests very strongly that the botnet’s current purpose is to intimidate users by a combination of hostile posting and massive bot-driven retweeting. In other words, the botnet served as a blunt message of presence.

In general, botnets are hard to attribute with absolute certainty through open sources. They are made up of large numbers of effectively anonymous accounts, which routinely move in a flock and misstate their identity and location; the only clue to their purpose lies in their posts.

The primary use of the bots in this network appears to be commercial. Each account tweets in multiple languages on a wide variety of themes, giving the network the appearance of a botnet hired out to those who want to amplify specific content. Their foray into political posts is therefore out of character, and may indicate either that the botnet’s operator chose to politicize the accounts, or that someone else paid for them to perform a specific function.

Sample of posts from the bot account @KDpdX3QORYWWt5b (“Bernadette White”), showing tweets in Turkish, German and Japanese.
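That multilingual signature is easy to measure programmatically. The sketch below uses the langdetect library (our choice for illustration); the sample tweets are hypothetical stand-ins for the Turkish, German, and Japanese posts shown above.

```python
# Sketch: profile the language mix of an account's recent tweets.
# A human account usually posts in one or two languages; a hired
# amplification bot often scatters across many.
from collections import Counter

from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def language_profile(tweets):
    """Count detected languages across a list of tweet texts."""
    counts = Counter()
    for text in tweets:
        try:
            counts[detect(text)] += 1
        except LangDetectException:
            continue  # very short or emoji-only tweets cannot be classified
    return counts

# Hypothetical sample mirroring the mix seen on "Bernadette White".
sample = [
    "Bugün hava çok güzel",            # Turkish
    "Das ist ein sehr gutes Angebot",  # German
    "今日はいい天気ですね",              # Japanese
]
print(language_profile(sample))  # e.g. Counter({'tr': 1, 'de': 1, 'ja': 1})
```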

The identity of the person or group who chose to politicize these accounts is not clear. Moreover, the spamming of @DFRLab and ProPublica followed articles dealing with both Russian bots and far-right bots in the United States. Either group could have launched, or commissioned, the attack.

However, the post by Julia Davis dealt exclusively with Russia, and its artillery attacks on Ukraine:

Far-right activists or bot herders in the United States are less likely to have an interest in intimidating a researcher posting about Russia shelling Ukraine; however, pro-Kremlin bot herders would have every interest in so doing.

The motives therefore suggest that the botnet was most likely controlled, or commissioned, by pro-Kremlin individuals in order to intimidate those who research Kremlin warfare and propaganda.

Proving the same network

In each of the five botnet deployments listed above, many of the fake accounts had very specific — indeed unmistakable — features in common.

Across all five instances, repeatedly, accounts with different screen names and handles, but identical avatar pictures, retweeted or liked the relevant posts.

Thus Julia Davis’ tweet was recently liked by an account called @UfBC6EsjqbUAX9D (“Andrea Lewis”). The avatar image is exactly the same as that of @KDpdX3QORYWWt5b (“Bernadette White”), which retweeted the first attack on ProPublica, and @YsNQVpcq1grSuXF (“Carolyn Wright”), which retweeted the fake news of Ben Nimmo’s death.

Left to right, “Andrea Lewis”, “Carolyn Wright” and “Bernadette White”.

Davis’ tweet was also liked by @wT4Mvqah8J78wwo (“Lisa Tucker”). This has the same avatar image as @TomMondy (“Tom Mondy”), which shared the second attack on ProPublica; and @K72MYE2c2tTJBnZ (“Alison Quinn”), which shared the “Nimovitch” tweet.

Left to right, “Tom Mondy”, “Lisa Tucker” and “Alison Quinn”.

Another account to retweet the fake Czuperski tweet was @Keith_Beckwith (“Keith Beckwith”). Despite the male name, it had the same (female) avatar image as @Vu7L3tvbjNDYf64 (“Diana Hill”), which retweeted Davis’ post; and @HannahHochsted2 (“Hannah Hochstedler”), which retweeted the first attack on ProPublica.

Left to right, “Hannah Hochstedler”, “Diana Hill” and “Keith Beckwith”.

Time and again, these same images, and many similar ones, crop up in different accounts which amplified the five key posts. This is diagnostic: it can only realistically indicate a single botnet, using the same images with different names in an attempt to pass unnoticed.
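Checks of this kind can be automated with perceptual hashing, which gives visually identical images the same fingerprint regardless of which account or filename they are attached to. Below is a minimal sketch using the Pillow and imagehash libraries; the avatars/<handle>.jpg layout is our assumption.

```python
# Sketch: group accounts whose avatar images are visually identical.
# Assumes avatars have been downloaded to avatars/<handle>.jpg (a
# hypothetical layout). Identical hashes across different handles are
# the "same face, different name" signature described above.
from collections import defaultdict
from pathlib import Path

import imagehash
from PIL import Image

groups = defaultdict(list)
for path in Path("avatars").glob("*.jpg"):
    h = imagehash.phash(Image.open(path))  # perceptual hash of the avatar
    groups[str(h)].append(path.stem)       # file stem = account handle

for h, handles in groups.items():
    if len(handles) > 1:  # one image re-used by several accounts
        print(h, handles)
```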

When bots go wild

The retweeting of the Atlantic Council post followed a different pattern. The great majority of the accounts in this case were faceless, as these screenshots from a machine scan show:

These accounts were not marked by alphanumeric handles. Their handles and usernames appeared to match, and many were created years ago, such as these three:

Left to right, the profiles of Erasmo Lima dos Santos, Phoenix Perez and Ben Langley, members of the network.
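Those traits translate directly into a simple screening heuristic. The sketch below assumes user records in the shape of Twitter’s classic v1.1 user object; the scoring thresholds are illustrative assumptions, not the criteria used in this analysis.

```python
# Sketch: flag likely members of a "faceless" network from Twitter
# v1.1 user objects. Thresholds are illustrative, not definitive.
def faceless_score(user: dict) -> int:
    score = 0
    if user.get("default_profile_image"):  # no avatar picture
        score += 2
    if not user.get("profile_banner_url"):  # no background image
        score += 1
    name = user.get("name", "").replace(" ", "").lower()
    if name == user.get("screen_name", "").lower():  # handle matches username
        score += 1
    if user.get("followers_count", 0) < 25:  # no real audience
        score += 1
    return score

# Hypothetical account resembling the profiles shown above.
user = {
    "screen_name": "benlangley",
    "name": "Ben Langley",
    "default_profile_image": True,
    "followers_count": 3,
}
print(faceless_score(user))  # 5: worth a closer look
```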

These are members of a different network; however, it is still a network, with all the accounts repeatedly sharing similar or identical posts, including a variety of commercial advertisements in different languages, and these stories from the New Delhi Times:

Shared post by three of the many accounts to amplify the Atlantic Council.

The above post does concern Russia, but this is not diagnostic; it was adjacent to another share of an article on the China-Pakistan Economic Corridor (CPEC), and its implications for India:

Shared post by three of the many accounts to amplify the Atlantic Council.

This network appears, again, to be commercial, considerably larger (over 100,000 members) but much less sophisticated, without the avatar images which might afford it some degree of camouflage. It appears to have been deployed, again, for intimidation.

But its activities were not limited to the Atlantic Council’s post. As word of the incident spread, NATO Spokeswoman Oana Lungescu tweeted about it.

The tweet was rapidly retweeted by a slew of faceless accounts, as this screenshot of the retweets list shows:

Among them were many which had also retweeted the Atlantic Council post, including @KeriDanielle…

@zjsgrant (screen name “Jakc Grant”)…

… and @reiymarcaylie (screen name “rachelle”).

This bot reaction was unsophisticated and easy to detect, not least because of the high proportion of faceless accounts. It was also indiscriminate, spreading to users not involved in the original posts.

We considered that the bots had probably been programmed to react to a relatively simple set of triggers, most likely the words “bot attack” and the @DFRLab handle. To test the hypothesis, we posted a tweet mentioning the same words, and were retweeted over 500 times in nine minutes — something which, admittedly, does not occur regularly with our human followers:

The experience was so novel that we repeated the test, and were rewarded with hundreds more retweets and likes.
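The test itself is easy to quantify: post the trigger phrase, record when retweets arrive, and compare that velocity to an organic baseline. A minimal sketch follows; the collection of retweet timestamps is assumed rather than shown.

```python
# Sketch: measure how quickly retweets arrive after a trigger tweet.
# `events` is a list of retweet timestamps gathered by whatever means
# are available; the collection step is assumed, not shown.
from datetime import datetime, timedelta

def retweets_within(events, posted_at, minutes=9):
    """Count retweets arriving within `minutes` of the original post."""
    cutoff = posted_at + timedelta(minutes=minutes)
    return sum(1 for t in events if posted_at <= t <= cutoff)

posted_at = datetime(2017, 8, 29, 14, 0)  # hypothetical posting time
# Hypothetical burst: one retweet per second for nine minutes.
events = [posted_at + timedelta(seconds=s) for s in range(540)]
print(retweets_within(events, posted_at))  # 540: far beyond an organic rate
```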

Some of the users who replied to such tweets were also targeted, regardless of the language. The final number of retweets and likes was almost identical in each case, reinforcing the impression of a network.

https://twitter.com/Alexey__Kovalev/status/902669590399995904

Not all attempts to trigger the algorithm worked, indicating that some other factor was also at play:

https://twitter.com/dmarusic/status/902669256394977281

However, the pattern was sufficiently clear that it seemed worth bringing to Twitter’s attention as a case of automated harassment. We therefore posted another tweet using the trigger words, but also including the handle of @TwitterSupport. If it triggered the algorithm, this would result in many of the bots in the network tweeting directly at Support, making it much easier for Twitter to identify them. At the least, this would deliver to Twitter a substantial list of the identities of the bots in the network; it could also lead to action against the network.

The botnet obliged. By the morning of August 30, over 50,000 bot accounts had tweeted at @TwitterSupport. They were still functioning, suggesting either that Twitter had not yet reacted or that the tweets had been filtered before reaching Support; the response nonetheless reinforced the conclusion that the bots were programmed to react to the words “bot”, “attack”, and “@DFRLab”.

Conclusion

The efforts by @DFRLab and ProPublica to expose these botnets have drawn an increasingly strident and heavy-handed retaliation. By doing so, they have also exposed the sheer scale of the botnets which are available for intimidation.

The Atlantic Council post alone was retweeted by over 100,000 mostly faceless accounts in a coordinated network. The more sophisticated bots, with avatar images and names, constituted tens of thousands more. This represents a massive potential for abuse and aggression.

Given the role of the more sophisticated accounts in spamming Julia Davis’ tweet, it is likely that they were controlled or commissioned by pro-Russian users, at least for the duration of the attack. The faceless network cannot be attributed so clearly, but it was plainly used for aggressive purposes.

Each botnet, however, had a weakness. The more sophisticated one exposed the linkage between its accounts very visibly, by re-using the same photos time and again. The larger one was run by such a simple algorithm that it could be encouraged to turn itself in to Twitter.

As such, these efforts at intimidation backfired. Botnets are most effective when they are undetected. Using them to grotesquely inflate likes and retweets on specific posts is an effective way to make sure they are detected.


Follow along for more in-depth analysis from our #DigitalSherlocks.