Social media manipulation as a political tool is spreading


Social media manipulation is getting worse: as more governments use it to shape public opinion, it poses a growing threat to democracy, according to a new report from the Oxford Internet Institute.

There’s nothing new about political parties and governments using propaganda, but the new normal is toxic messaging that’s easy to spread on a global scale, thanks to brawny new tools for targeting and amplification, the researchers said.

According to the University of Oxford’s Computational Propaganda Research Project, the use of algorithms, automation, and big data to shape public opinion – i.e. computational propaganda – is becoming “a pervasive and ubiquitous part of everyday life.”

For its third annual report, the project examined what it calls “cyber troop” activity in 70 countries. Cyber troops is the collective term for government or political party actors that use social media to manipulate public opinion, harass dissidents, attack political opponents or spread polarizing messages meant to divide societies, among other things.

Over the past two years, there’s been a 150% increase in the number of countries using social media to launch manipulation campaigns, the project found: from 28 countries in its 2017 report to 70 today.

The use of computational propaganda to shape public attitudes via social media has become mainstream, extending far beyond the actions of a few bad actors. In an information environment characterized by high volumes of information and limited levels of user attention and trust, the tools and techniques of computational propaganda are becoming a common – and arguably essential – part of digital campaigning and public diplomacy.

What accounts for the growth?

Part of the growth can be attributed to observers getting more sophisticated at identifying and reporting such manipulation campaigns, aided by better digital tools and a more precise vocabulary for describing the cyber troop activity they uncover, the researchers said.

The researchers say that some of the growth also comes from countries new to social media that are experimenting with the tools and techniques of computational propaganda during elections or as a new tool of information control.

Their favorite online platforms

The researchers found evidence that 56 countries are running cyber troop campaigns on Facebook, once again making it the No. 1 platform for such activity. The researchers attribute that to its market size – it’s one of the world’s largest social networks – and to its reach: campaigns can influence not only target audiences, but also their networks, including close family and friends. Facebook also works well as a propaganda tool because it disseminates political news and information and lets users form groups and pages.

In response to media inquiries about the report, Facebook said that showing users accurate information is a “major priority” for the company. From a spokesperson:

We’ve developed smarter tools, greater transparency, and stronger partnerships to better identify emerging threats, stop bad actors, and reduce the spread of misinformation on Facebook, Instagram and WhatsApp.

Over the past year, the project has also seen cyber troop activity growing on image- and video-sharing platforms such as Instagram and YouTube, as well as on WhatsApp. The researchers believe that over the next few years, political communication will increasingly shift to these visual platforms.

Samantha Bradshaw, one of the report’s authors, told Reuters that on platforms like these, users are seeing fake news that’s delivered in quick, easily digestible hits that don’t strain the brain:

On Instagram and YouTube it’s about the evolving nature of fake news – now there are fewer text-based websites sharing articles and it’s more about video with quick, consumable content.

It’s difficult to police visual content

Bradshaw said that the move to visual content as a propaganda tool will make it tougher for platforms to automatically identify and delete this kind of material. Unfortunately, we can’t rely on users to report even horrific videos, let alone visual content that’s merely misleading or biased.

The Christchurch, New Zealand terrorist attack in March is an example of the kind of material that can flow freely on social media. Facebook said in a statement that the attacker’s video streamed for 29 minutes before it was first reported, racking up thousands of views before it was ultimately removed.

During that time, the video was repeatedly shared and uploaded across even more platforms.

Bradshaw:

It’s easier to automatically analyze words than it is an image. And images are often more powerful than words with more potential to go viral.
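To see why, consider how little machinery it takes to scan text compared with images. The sketch below is purely illustrative – the phrase list and logic are made-up examples, not anything from the report or from any platform’s real moderation stack – but it captures the asymmetry: flagging text can be a few lines of string matching, while flagging an image requires trained computer-vision models.

```python
# Illustrative only: a toy text filter vs. the image problem.
# The flagged phrases are hypothetical examples, not real moderation rules.

FLAGGED_PHRASES = {"miracle cure", "rigged election", "they don't want you to know"}

def flag_text(post: str) -> bool:
    """Flag a post if it contains any known misleading phrase."""
    text = post.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

def flag_image(image_bytes: bytes) -> bool:
    """No string-matching equivalent exists for images: raw pixels say
    nothing about meaning, so platforms must train and run computer
    vision models (classification, OCR, perceptual hashing) instead."""
    raise NotImplementedError("image analysis requires trained ML models")

print(flag_text("This miracle cure is the secret THEY don't want you to know"))  # True
```

Real systems are far more sophisticated on both sides, but the gap in difficulty persists – and it’s exactly the gap that propagandists are moving into.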

Strategies, tools, techniques

Over the past three years, the researchers have been tracking the use of three types of fake accounts used in computational propaganda campaigns: bot, human, and cyborg. Bots, highly automated accounts designed to mimic human behavior online, are often used to amplify narratives or drown out political dissent, they said. They found evidence of bot accounts being used in 50 of the 70 countries they tracked.

They found that humans are behind even more fake accounts, though. Such accounts engage in conversations by posting comments or tweets, or by private messaging people. These accounts were found in 60 out of the 70 countries covered in this year’s report. The third type of fake account, cyborg accounts, is a hybrid that blends automation with human curation.

This year, the project added a fourth type to the list: hacked or stolen accounts. These aren’t fake, per se; rather, they’re real, high-profile accounts whose wide reach makes them attractive to hijackers. Such accounts are used strategically to spread pro-government propaganda or to censor freedom of speech by locking the rightful owner out, the researchers say.
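For a sense of how these categories differ in practice, here’s a toy sketch of the bot/human/cyborg distinction. The signals and thresholds are hypothetical assumptions on our part – real bot detection weighs many more factors, such as account age, network structure and coordination patterns – but volume and repetition are the classic tells of automation.

```python
from dataclasses import dataclass

# Hypothetical heuristic for the account types described in the report.
# Thresholds are illustrative, not taken from the report or any platform.

@dataclass
class Account:
    posts_per_day: float    # average posting rate
    duplicate_ratio: float  # share of posts that copy other posts verbatim

def classify(account: Account) -> str:
    """Roughly bucket an account by how automated its behavior looks."""
    if account.posts_per_day > 100 or account.duplicate_ratio > 0.9:
        return "bot-like"      # high-volume amplification mimicking humans
    if account.posts_per_day > 30 or account.duplicate_ratio > 0.5:
        return "cyborg-like"   # automation blended with human curation
    return "human-like"        # conversational accounts run by people

print(classify(Account(posts_per_day=250, duplicate_ratio=0.95)))  # bot-like
print(classify(Account(posts_per_day=40, duplicate_ratio=0.6)))    # cyborg-like
print(classify(Account(posts_per_day=5, duplicate_ratio=0.1)))     # human-like
```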

Some key findings from the report:

  • 87% of countries use human-controlled accounts
  • 80% of countries use bot accounts
  • 11% of countries use cyborg accounts
  • 7% of countries use hacked/stolen accounts
  • In 71% of countries, cyber troops spread pro-government or pro-party propaganda
  • In 89%, they attack the opposition or mount smear campaigns
  • In 34%, they spread polarizing messages designed to drive divisions within society
  • 75% of countries used disinformation and media manipulation to mislead users
  • 68% of countries use state-sponsored trolling to target political dissidents, the opposition or journalists
  • In 73% of countries, cyber troops amplify messages and content by flooding hashtags

As far as communication strategies go, the most common is disinformation or manipulated media – a more nuanced term for what we’ve been referring to as fake news. The report found that in 52 of the 70 countries examined, cyber propagandists cooked up memes, videos, fake news websites or manipulated media to mislead users. To target specific communities with that disinformation, they’d buy ads on social media.

Trolling, doxxing and harassment are also a growing problem. In 2018, 27 countries were using state-sponsored trolls to attack political opponents or activists via social media. This year, it’s up to 47 countries.

Other tools of repression include censorship through the mass-reporting of content or accounts.

What to do?

It’s a tough nut to crack. The report doesn’t mention how you might spot, block or ignore manipulation, but it does say that we can’t blame social media for what’s happening. Democracy was starting to fall apart before social media blossomed, the researchers said:

Many of the issues at the heart of computational propaganda – polarization, distrust or the decline of democracy – have existed long before social media and even the Internet itself. The co-option of social media technologies should cause concern for democracies around the world – but so should many of the long-standing challenges facing democratic societies.

For strong democracies to flourish, we need “access to high-quality information and an ability for citizens to come together to debate, discuss, deliberate, empathize, and make concessions,” the researchers assert. These days, we turn to social media to stay informed. But are the platforms up to the task?

Are social media platforms really creating a space for public deliberation and democracy? Or are they amplifying content that keeps citizens addicted, disinformed, and angry?

Start them young

While the Oxford University researchers didn’t delve into methods to spot fake news, others are working on it. For example, in June 2019, Google launched an initiative to help train kids to spot fake news.

The lesson plans are designed to keep kids safe and help them become better online citizens. They teach kids how to scrutinize emails and text messages for signs of phishing, how to respond to suspicious messages by verifying the sender’s identity, and other techniques that come in handy at shielding people from the mental warfare of cyber troops: how to spot and interact with chatbots, how to use criteria like motive and expertise to establish credibility when evaluating sources, how to spot fake URLs, and how to evaluate headlines.
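As an illustration of the fake-URL lesson – and only an illustration: the trusted-domain list and character-swap table below are assumptions of ours, not part of Google’s curriculum – a basic lookalike-domain check can be sketched in a few lines:

```python
from urllib.parse import urlparse

# Hypothetical example of a lookalike-domain check. The trusted domains
# and digit-for-letter swaps are illustrative assumptions only.

TRUSTED = {"google.com", "facebook.com", "paypal.com"}
LOOKALIKES = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def looks_suspicious(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Punycode hostnames can hide homoglyph tricks (e.g. Cyrillic letters).
    if host.startswith("xn--") or ".xn--" in host:
        return True
    # Undo digit-for-letter swaps, e.g. "paypa1.com" -> "paypal.com".
    normalized = host.translate(LOOKALIKES)
    return normalized in TRUSTED and host not in TRUSTED

print(looks_suspicious("https://paypa1.com/login"))  # True: digit swap
print(looks_suspicious("https://paypal.com/login"))  # False: the real domain
```

Real phishing filters build on the same idea with edit-distance comparisons, certificate checks and domain-reputation data.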

The lesson plans – part of Google’s Be Internet Awesome program – are one piece of a broader effort to stop the spread of fake news. Earlier this year, Google also released fact-checking tools for journalists to tag stories that debunk misinformation. Mozilla also has its own fake news-fighting effort.

And if the social media platforms and other internet giants can’t work this out, and if all else fails, at least we have mice.


from Naked Security https://ift.tt/2n7zCd0
