How AI Is Used to Scam You (and What You Can Do About It)


This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

There’s a lot of hype around AI, and just as much fear. But anxiety over pie-in-the-sky scenarios involving sentient robots and supercomputers obfuscates the real threats that already exist, like AI-assisted scams.

AI introduces new scams and enhances old ones

Scammers, hackers, and other malicious actors use AI in many ways, but the ultimate goal is usually the same as other online schemes—to get you to click on fake links or download malware that can steal your personal data, take over your accounts and devices, or spy on you. While the goals are the same as the phishing and malware scams we’ve seen for decades, AI tools can make the job easier by creating more enticing—or threatening—reasons to click those malicious links.

A growing threat involves training an AI tool on voice recordings to replicate the voice of a friend or family member, then using those fake clips to dupe a victim into thinking a loved one needs them to send money or grant access to important accounts. Similarly, hackers can feed a chatbot public information and social media posts to send personalized messages or emails impersonating someone the victim knows. In extreme cases, scammers use these tactics to convince people that someone they know has been kidnapped and is being held for ransom.

Another tactic is using AI-generated content, like fake political articles and social media posts, to rile someone up and get them to click on a dangerous link, or using deepfake pornography to blackmail victims. There are even fake AI-written job listings out there.

The technology is already sophisticated enough to trick people, especially those who aren't looking closely enough, and AI-generated content will only become more believable as the technology develops. Unfortunately, few laws or regulations currently exist to prevent or penalize the creation and distribution of deepfakes, AI-generated misinformation, or the tools used to make them.

How to protect yourself from AI scams

That means the onus is on the general public to keep themselves safe. While this can be difficult, since it only takes a few seconds of recorded audio to clone someone's voice, there are still ways to spot and prevent these scams.

If you ever receive a call, email, or text message from someone claiming to be a person you know who is in danger, immediately reach out to that person using the phone number or email address saved in your contacts to confirm it's really them. Do not do anything until you know for sure.

As for AI-generated images or video, the best thing to do is scan the image for inconsistencies. AI images might be convincing at a glance, but will reveal errors upon closer inspection, like extra fingers, missing body parts, or incorrect proportions, to name a few. AI videos will have similar issues, and motion may look jittery, glitchy, or warped.

Another strategy is to perform a reverse image search: Drop the image into Google to see if that image, or others like it, exist and are credited to legit photographers, artists, or publications. Some AI art generators post all images made with their tools online, so you can see if the image came from a source like Midjourney.
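
If you'd rather script that check than do it by hand, here's a minimal sketch in Python that opens a reverse image search in your browser. The Google Lens "uploadbyurl" endpoint used here is an assumption based on how the public web interface behaves, not an official, documented API, so treat it as illustrative.

```python
# Minimal sketch: open a reverse image search for a suspicious image URL.
# Assumption: the Google Lens "uploadbyurl" endpoint mirrors the public web
# UI; it is not an official API and may change without notice.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open a browser tab running a reverse image search on image_url."""
    search_url = "https://lens.google.com/uploadbyurl?url=" + quote(image_url, safe="")
    webbrowser.open(search_url)

if __name__ == "__main__":
    # Hypothetical example image; substitute the URL you want to check.
    reverse_image_search("https://example.com/suspicious-image.jpg")
```

From the results page you can see whether the image traces back to a credited photographer, a news outlet, or a gallery of AI-generated output.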

Otherwise, the methods for avoiding AI-assisted scams are no different from preventing common phishing and malware attacks:

  • Don’t take calls from unknown phone numbers.
  • Don’t click on suspicious links.
  • Double-check messages are coming from a legitimate source.
  • Don’t log into random websites with your social media, Google, or Apple accounts.
  • Don’t give your personal data or login information to anyone over the phone or online, even if they claim to be an official representative of a company, bank, or social media website.
  • Make unique passwords for every account you use (see the sketch after this list).
  • Always report any suspected scams.
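
On the point about unique passwords, a password manager will generate and store them for you, but here's a minimal sketch in Python showing how little it takes to produce one yourself, using the standard library's secrets module. The 20-character length and character set are illustrative choices, not official guidance.

```python
# Minimal sketch: generate a unique, random password for each account using
# Python's standard-library "secrets" module (a cryptographically secure RNG).
# The length and character set below are illustrative choices.
import secrets
import string

def make_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(make_password())  # prints a different random password on each run
```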

This isn’t a comprehensive list, but following these strategies can help keep you safe. Be sure to check our guides on avoiding online scams and other internet privacy strategies for more tips.

AI grifters and false advertising

While scammers using AI is a major concern, there’s also a second, broader category of AI scams, one that I alluded to in the intro: AI hype.

As with any exciting new technology, companies are eager to jump on the AI hype train. We've already seen a surge in "AI-powered" products and features, like the AI tools added to Bing and Google search. But that boom in interest is also attracting grifters who will use AI, or our fears of it, to sell you bullshit. The same thing happened with cryptocurrency and NFTs in recent years, and now it's happening with AI. Don't get suckered.

One of the best ways to safeguard yourself is to learn how AI works, and what it is and is not capable of. The term "AI," short for artificial intelligence, doesn't accurately describe what these tools really are: the name implies sentient beings capable of thinking and reasoning. They're not, and anyone suggesting their products, or anyone else's, are somehow "alive" is either misguided or lying to you.

That said, plenty of grifters will use more mundane and realistic claims to sell you on whatever AI-adjacent schemes they’re running—like claims you could make tons of money as a freelance writer, coder, or graphic designer using their AI tools. Not only is passing off AI-generated content as your own unethical, but it’s also unwise.

Tools like ChatGPT or Midjourney only work because real humans made the text and art they were trained on, often without consent or compensation. There's no mind in these AI tools, and therefore no experience, memory, or skill informing their output. In other words, they plagiarize, often poorly, and there's no way to guarantee an AI-generated article is accurate unless a human edits it.

That’s not to say these tools aren’t impressive or that they’re entirely useless. The point is, AI can do a lot of things, but it’s not magic, and it’s not perfect. The next time you hear a claim about a new AI product that sounds too good to be true, chances are it is. Don’t fall for it.


