Malicious actors’ GenAI use has yet to match the hype


Generative AI has lowered the barrier to entry for malicious actors and made them more efficient: quicker at creating convincing deepfakes, mounting phishing campaigns, and running investment scams, the most recent report by the Cyber Threat Alliance (CTA) has concluded.

For now, though, it hasn’t made attackers “smarter,” nor has it completely transformed cyber threats.

How malicious actors use GenAI

The report, which is based on data and evidence-based case studies available to CTA members, explains what attackers are currently using GenAI for:

  • Creating deepfaked videos and deceptive images, cloning specific people’s voices, and crafting highly convincing emails, messages, and website content
  • Assistance in creating malware (though not creating working malware without additional human input and tweaking)
  • Optimizing command and control operations (e.g., for managing botnets)
  • Spreading misinformation and disinformation (e.g., to further online conspiracies, to affect election campaigns)
  • Creating AI-controlled networks of inauthentic social media accounts (bot farms)

“While AI innovations are undeniably powerful, they have so far resulted in incremental improvements in adversary capabilities, but they have not created entirely new threats,” the analysts noted.

“Defending against AI-enhanced threats may be harder than defending against standard threats, but it does not require revolutionary tools or techniques.”

Thwarting AI-enhanced threats

So far, foundational cybersecurity practices – regular software updates, multi-factor authentication, offline data backups, endpoint monitoring, behavioral analytics, and so on – remain essential to countering all threats, even those augmented by AI.

But fighting the latter will also require a combination of technical solutions (such as deepfake detectors) and, even more importantly, constant education, the acquisition of specific skills (e.g., knowing how to fact-check, do reverse image searches, verify metadata, etc.), and the encouragement of critical thinking.

“Organizations should emphasize training that prioritizes content-based analysis and fosters a culture of healthy skepticism. Employees should be encouraged to ask questions like, ‘Did I initiate this request?’ or ‘Is this communication consistent with prior exchanges?’,” noted Chelsea Conard, an analyst with the CTA.

“Other defenses can rely on both technical and process-based measures. Technical tools can analyze content for manipulation, such as audio mismatches. When these tools fall short, organizations can rely on process-based measures, including multi-channel verification, dual approvals or pre-arranged authentication phrases, to provide critical safeguards.”
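To make the process-based measures concrete, here is a minimal sketch of how a dual-approval check with a pre-arranged authentication phrase might look in code. Everything in it (the `SensitiveRequest` class, the channel names, the example phrase) is hypothetical and chosen for illustration; the report describes the process, not any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SensitiveRequest:
    """A sensitive action (e.g., a wire transfer) that must be confirmed
    over two distinct channels, each using a phrase agreed out of band."""
    description: str
    required_channels: frozenset = frozenset({"email", "phone"})
    auth_phrase: str = "blue heron"  # pre-arranged, never sent in the request itself
    approvals: dict = field(default_factory=dict)  # channel -> phrase supplied

    def approve(self, channel: str, phrase: str) -> None:
        self.approvals[channel] = phrase

    def is_authorized(self) -> bool:
        # Dual approval: every required channel must have signed off,
        # and each must have used the correct pre-arranged phrase.
        return all(
            self.approvals.get(ch) == self.auth_phrase
            for ch in self.required_channels
        )

req = SensitiveRequest("Wire transfer to new vendor account")
req.approve("email", "blue heron")
print(req.is_authorized())   # False: the phone channel has not confirmed yet
req.approve("phone", "blue heron")
print(req.is_authorized())   # True: both channels confirmed with the right phrase
```

The point of the multi-channel requirement is that a deepfaked voice call or a spoofed email alone can never satisfy the check; an attacker would have to compromise both channels and know the out-of-band phrase.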

The good news is that adversaries’ use of GenAI has not yet matched the hype and, if organizations and institutions act fast, they can put in place defenses that will stymie most AI-enhanced threats.


from Help Net Security https://ift.tt/D1rz7pa
