What People Are Getting Wrong This Week: The Danger of Deepfakes


Even before the term “deepfake” was coined in 2017, the fear that digitally faked audio, photos, and video would be used by nefarious people to manipulate public opinion was common. Now that any kind of fake can be instantly created by AI with almost no effort, the specter of villains using doctored evidence to steer history, shape public opinion, and destroy the notion of truth itself is widely regarded as inevitable. But it’s not going to happen (probably).

You can never say never, but in the last decade, there haven’t been any major examples of deepfake images or videos successfully manipulating public opinion on a large scale. Even though fakes are comically easy to produce and spread, and people are clearly invested in shaping what we think, deepfakes haven’t changed many minds.

There’s good reason to think the worst will never happen, too—not because the better angels of our nature will take over or some well-meaning legislation will be passed, but because evidence, doctored or real, doesn’t drive people’s opinions and beliefs. 

A "post-truth" world is nothing new

Manipulating images and sounds and spreading them all over the world is nothing new. Photographers have been altering photos since the invention of the camera. Motion pictures have been staged or edited to obscure or change what they portray. Audio editing is similarly easy and powerful. And people have been widely disseminating lies through the written word since the printing press was invented around 1440.

None of these older technologies resulted in a post-truth culture in which no one believes their eyes and ears. There have been a few minor outlier cases—the panic over Orson Welles’ War of the Worlds radio broadcast, for instance—but for the most part, faked images, audio, or films haven’t been widely accepted as true, because most people are fairly good at putting things into context and using common sense to discern the authenticity of what they see and hear. Even when something does fool people, like that faked image of the Pope in a puffer jacket, once it’s debunked, everyone gets off the bus.

We’re using the same truth-finding tools with deepfakes as we used with photographs before: common sense and context. It’s possible to create a pornographic video of Taylor Swift, but no matter how good it looks, no one would be fooled. It doesn’t make sense that Swift would star in that kind of video, and even if she had, it would have been widely reported. No one actually thinks our ex-presidents have sleepovers and play Black Ops either, even if it’s on YouTube.

Dangerous deepfakes barely even exist

As Walter Scheirer points out in his book A History of Fake Things on the Internet, it’s so hard for academic researchers to find convincing deepfakes that they have to create their own in order to study them. There are plenty of fake photos and videos on the Internet, of course, but they’re not deepfakes in the sense of being designed to fool anyone. They’re almost all memes, jokes, or porn, created to make people laugh or masturbate, not to make them think or change their minds. Mainly because it wouldn’t work.

Deep stories are more powerful than deepfakes

Fake evidence doesn’t change anyone’s mind. But neither does actual evidence. People form their opinions based on emotions, not facts or photo-manipulation. About 15% of Americans believe the US is controlled by a group of Satan-worshipping pedophiles who run a global child sex trafficking ring. Not because someone created an AI image of Joe Biden presiding over a black mass, but because it’s a titillating story that reinforces a pre-existing bias. This is a “deep story” as opposed to a deepfake, and deep stories are almost impossible to combat.

The best fake narratives have always been simple and so broad that no single piece of evidence could prove or disprove them. Things like “the election was stolen” or “9/11 was an inside job” stick with people. The really good ones last a long time, too. As Daniel Immerwahr points out in The New Yorker, people still think Catherine the Great had sex with a horse.

Realistic-looking evidence can actually make conspiracy theories less believable. There’s a widespread belief in extreme right-wing circles that a video of Hillary Clinton murdering a child as part of a Satanic ritual exists. The evidence consists of “trust me, bro” descriptions of the footage on message boards and breathless speculation about when it will be released. But no faked version has surfaced. No matter how “realistic” the AI footage might look, seeing this scene represented visually would likely destroy the illusion by highlighting how absurd it is at its core. You’d be able to pick it apart, and you’d be forced to consider the details of how something like this could actually happen on planet Earth instead of in the darkest corners of your mind.

