Twitter botnets used for political propaganda might have hit on an ingenious new way to cause mischief – bombard accounts they dislike with fake followers and retweets in an attempt to get them suspended by the site’s anti-abuse systems.
Normally, such botnets – made up of thousands of automated “sock puppet” accounts controlled from a single point – are used to spam fake news stories or bombard target Twitter accounts with large numbers of hostile tweets.
In recent weeks, however, journalists and non-profit organisations have been on the receiving end of a new twist on an old tactic, one which cybersecurity writer Brian Krebs, himself among the victims, describes as a “tweet and follower storm”.
The trigger for botnet attention in this case has been writing about Russian and US politics, as news site ProPublica discovered when it covered an analysis by the Digital Forensic Research Lab (DFRLab) of alleged Russian propaganda attempts to stir up political tensions in the US.
The pro-Russian bots responded by retweeting a condemnation of the story up to 23,000 times, apparently an attempt to drown out ProPublica’s coverage with the Twitter equivalent of white noise.
At the same time, DFRLab staff reported receiving intimidating tweets which were, again, hugely amplified by botnets, including, on August 28, the bogus claim that one of its staff, Ben Nimmo, had died.
Joseph Cox of The Daily Beast, who covered the story, reported this week that his piece was retweeted 1,300 times by bots while he attracted 300 new, mostly Russian-language, followers within a short period.
Then Cox’s account was suspended by Twitter with the following message:
Caution: This account is temporarily restricted. You’re seeing this warning because there has been some unusual activity from this account.
Presumably, Twitter had detected the suspicious retweets but incorrectly associated his account with them.
Two days later, journalist Brian Krebs avoided the same fate after writing about the bot phenomenon and being rewarded overnight with 12,000 new followers and as many retweets. Commenting on the reasons behind Cox’s suspension, he said this:
Let that sink in for a moment: A huge collection of botted accounts — the vast majority of which should be easily detectable as such — may be able to abuse Twitter’s anti-abuse tools to temporarily shutter the accounts of real people suspected of being bots!
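To see how a rule like that could be gamed, consider a minimal, hypothetical sketch of a rate-spike heuristic. Nothing here reflects Twitter’s actual, unpublished detection code; the function, thresholds and data are all invented for illustration. The weakness it demonstrates is that followers and retweets are generated by *other* accounts, so an attacker, not the victim, controls the signal:

```python
# Hypothetical sketch only: Twitter's real anti-abuse code is not public,
# and every name and threshold below is invented for illustration.
from dataclasses import dataclass

@dataclass
class DailyStats:
    new_followers: int        # followers gained that day
    retweets_received: int    # retweets of the account's tweets that day

def looks_like_bot_activity(history: list[DailyStats],
                            today: DailyStats,
                            spike_factor: float = 10.0) -> bool:
    """Flag an account whose daily followers or retweets jump to more
    than spike_factor times its recent average."""
    if not history:
        return False
    avg_followers = sum(d.new_followers for d in history) / len(history)
    avg_retweets = sum(d.retweets_received for d in history) / len(history)
    return (today.new_followers > spike_factor * max(avg_followers, 1)
            or today.retweets_received > spike_factor * max(avg_retweets, 1))

# A journalist who normally gains ~20 followers a day is hit by a botnet
# delivering 12,000 followers and retweets overnight. The rule fires,
# even though the spike was entirely outside the victim's control.
quiet_week = [DailyStats(new_followers=20, retweets_received=15)] * 7
storm_day = DailyStats(new_followers=12_000, retweets_received=12_000)
print(looks_like_bot_activity(quiet_week, storm_day))  # True
```

Any system built along these lines, suspending the target of anomalous activity rather than tracing its source, hands attackers exactly the lever Krebs describes.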
Twitter reinstated Cox’s account after a few hours, but one conclusion is that, whether by design or by accident, bots have hit on a new way to annoy Twitter users they take against.
According to Krebs, the 12,000 bot accounts unfollowed him but remain active on the service despite their suspicious behaviour.
On one level, this is not surprising – bots (in other words, automated accounts) are allowed under Twitter’s terms and conditions and have numerous legitimate uses. What isn’t allowed are fake accounts, which Twitter has been battling for years.
When fake accounts are corralled into botnets, trouble follows, with some of the biggest networks reaching hundreds of thousands of accounts. Some even have names, for example the 90,000-strong “Siren” botnet used to lure people to porn websites.
The larger question is why, after years of claimed improvements to its security protocols, Twitter still seems unable to spot accounts that look dubious and breach its terms and conditions.
All the big social media platforms have a problem with fake accounts used for nefarious purposes but only on Twitter do malicious bots seem able to pull off what amounts to a denial-of-service attack on individual users.
Is there a defence? After being targeted, DFRLab could find only one tactic capable of deterring the bot horde: copying @TwitterSupport and @Twitter into any complaint.