The Latest

A wave of layoffs has swept through the tech industry, leaving IT teams in a rush to revoke all access those employees may have had.

Additionally, 54% of tech hiring managers say their companies are likely to conduct layoffs within the next year, and 45% say employees whose roles can be replaced by AI are most likely to be let go, according to General Assembly.


Taking away access to company data the moment someone leaves might seem harsh, but it’s an important step to protect against security risks.

Not everyone leaves on good terms. For example, a 39-year-old man accessed his former company’s computer testing systems and deleted 180 virtual servers.

Key risks during layoffs

Insider threats: An offboarding employee, whether intentionally or unintentionally, can take sensitive data with them. If accounts are not properly deactivated or access is not revoked, they might log in, steal data, or cause damage. IBM found that 83% of organizations reported insider attacks in 2024.

The types of data that can be extracted:

  • Client/customer data
  • Confidential company information
  • Employee HR data
  • Financial data
  • Sensitive project files
  • Source code
  • Unreleased or sensitive marketing materials

Lack of monitoring during workforce transitions: During large-scale layoffs, teams often cannot cover all aspects of offboarding alongside their regular duties. Employees use devices like laptops, phones, and USB drives, as well as platforms such as email and collaboration tools like Slack or Teams. Managing all of this, especially in a hybrid work environment, can be challenging. As a result, important steps can be missed, and unusual activity might not get noticed.

Threat actors are watching: Layoff news gets around fast, and cybercriminals pay attention. They use it to launch phishing and social engineering attacks, taking advantage of how off guard people can be during times like these.

The question we need to ask ourselves is: what can we do to minimize all these risks?

Mitigation strategies for safer offboarding

Revoke access to user accounts, systems, applications, and networks.

Collect all devices such as laptops, phones, and tablets, and erase all data from them.

Check for any shared passwords or special access and remove them, then update records of who has access to what.

Hand off files, projects, or documents to appropriate team members.

Store anything needed for legal or audit reasons in a safe place.

Conduct an exit interview with the person before they leave. Get feedback and check for any loose ends. Remind them to keep company info private, even after they leave.
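The access-related steps above can be sketched as a simple checklist-driven routine. This is a minimal Python sketch over hypothetical in-memory records — the `accounts`, `devices`, and `shared_credentials` structures and the `offboard` function are illustrative placeholders, not a real IAM, MDM, or vault API:

```python
# Minimal offboarding sketch over hypothetical in-memory records.
# In practice, each step would call your identity provider, MDM, and
# secrets-management APIs, and every action would be logged for audit.

def offboard(user, accounts, devices, shared_credentials):
    """Disable the user's accounts, flag their devices for wipe,
    and mark any shared secrets they knew for rotation."""
    completed = []

    # 1. Revoke access: disable every account belonging to the user.
    for account in accounts:
        if account["owner"] == user:
            account["active"] = False
    completed.append("revoke_access")

    # 2. Collect devices: mark the user's devices for return and wipe.
    for device in devices:
        if device["assigned_to"] == user:
            device["status"] = "pending_wipe"
    completed.append("collect_devices")

    # 3. Rotate any shared credentials the departing user knew.
    for cred in shared_credentials:
        if user in cred["known_by"]:
            cred["needs_rotation"] = True
            cred["known_by"].remove(user)
    completed.append("rotate_shared_credentials")

    return completed
```

A real workflow would also cover the file handover, legal retention, and exit-interview steps, which don't reduce to an API call.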

The role of leadership and HR

It’s very important for IT, HR, and legal teams to work closely together to ensure the process goes smoothly. Each has its own expertise, and together they can better identify risks and keep everything on track.

How an employee leaves matters. Be transparent with both the people leaving the company and those who stay. This helps everyone understand the reasons behind the departure and the steps the company is taking next. It also curbs rumors and unverified information that could breed distrust and lead people to make security mistakes out of fear.

Establishing policies that balance security with empathy sets the right tone. Rules should be strong enough to keep data safe but not so strict that they feel unfair. People are more likely to follow them when they feel they’re being treated fairly.


from Help Net Security https://ift.tt/rawspZY

In this Help Net Security video, Stefan Tanase, Cyber Intelligence Expert at CSIS, gives an overview of how cybercriminals are changing their tactics, including using legitimate tools to avoid detection and developing more advanced info-stealing malware. Tanase also talks about new social engineering tricks like fake CAPTCHAs, changes in ransomware patterns, and the rise of mobile phishing attacks.

The post Cyber threats are changing and here’s what you should watch for appeared first on Help Net Security.


from Help Net Security https://ift.tt/TJCtzIi

Hybrid cloud infrastructure is under mounting strain from the growing influence of AI, according to Gigamon.


Cyberthreats grow in scale and sophistication

As cyberthreats increase in both scale and sophistication, breach rates have surged to 55% during the past year, representing a 17% year-on-year rise, with AI-generated attacks emerging as a key driver of this growth.

Security and IT teams are being pushed to a breaking point, with the economic cost of cybercrime now estimated at $3 trillion worldwide according to the World Economic Forum. As AI-enabled adversaries grow more agile, organizations are challenged with ineffective and inefficient tools, fragmented cloud environments, and limited intelligence.

46% of security and IT leaders say managing AI-generated threats is now their top security priority. One in three organizations report that network data volumes have more than doubled in the past two years due to AI workloads, while 47% of all respondents are seeing a rise in attacks targeting their organization’s LLM deployments. 58% say they’ve seen a surge in AI-powered ransomware—up from 41% in 2024.

91% admit to making compromises in securing and managing their hybrid cloud infrastructure. The key challenges behind these compromises include a lack of clean, high-quality data to support secure AI workload deployment (46%) and a lack of comprehensive insight and visibility across their environments, including lateral movement in East-West traffic (47%).

Public cloud risks prompt industry recalibration

Once considered an acceptable risk in the rush to scale post-COVID operations, the public cloud is now coming under intense scrutiny. Many organizations are rethinking their cloud strategies in the face of their growing exposure, with 70% of security and IT leaders now viewing the public cloud as a greater risk than any other environment.

As a result, 70% report their organization is actively considering repatriating data from public to private cloud due to security concerns and 54% are reluctant to use AI in public cloud environments, citing fears around intellectual property protection.

As cyberattacks become more sophisticated, the limitations of existing security tools are coming sharply into focus. Organizations are shifting their priorities toward gaining complete visibility into their environments, a capability now seen as crucial for effective threat detection and response.

55% of respondents lack confidence in their current tools’ ability to detect breaches, citing limited visibility as the core issue. As a result, 64% say their number one focus for the next 12 months is achieving real-time threat monitoring delivered through having complete visibility into all data in motion.

Deep observability becomes the new standard

With AI driving traffic volumes, risk, and complexity, 89% of security and IT leaders cite deep observability as fundamental to securing and managing hybrid cloud infrastructure. Executive leadership is taking notice, as boards increasingly prioritize complete visibility into all data in motion, with 83% confirming that deep observability is now being discussed at the board level to better protect hybrid cloud environments.

“This year’s survey signals a profound shift in risk management priorities, and the time has come to recalibrate how hybrid cloud infrastructure is secured and managed in the AI era,” said Chaim Mazal, chief security officer at Gigamon. “Deep observability provides that recalibration by combining traditional log data with network-derived telemetry, giving security teams the clarity to see through encrypted traffic, detect AI-powered threats, and strengthen defenses before the blast radius expands. With 88% of security and IT leaders recognizing its importance for securing AI deployments, this approach has become foundational to modern cybersecurity.”

“With nearly half of organizations saying attackers are already targeting their large language models, AI security can’t be an afterthought, it needs to be a top priority,” said Mark Walmsley, CISO at Freshfields. “The key to staying ahead? Visibility. When we can clearly see what’s happening across AI systems and data flows, we can cut through the noise and manage risk more effectively. Deep observability helps us spot vulnerabilities early and put the right protections in place before issues arise.”

The study surveyed over 1,000 global Security and IT leaders across Australia, France, Germany, Singapore, the UK, and the United States.


from Help Net Security https://ift.tt/kQLm2vD

Week in review

Here’s an overview of some of last week’s most interesting news, articles, interviews and videos:

Trojanized KeePass opens doors for ransomware attackers
A suspected initial access broker has been leveraging trojanized versions of the open-source KeePass password manager to set the stage for ransomware attacks, WithSecure researchers have discovered.

AI hallucinations and their risk to cybersecurity operations
AI systems can sometimes produce outputs that are incorrect or misleading, a phenomenon known as hallucinations. These errors can range from minor inaccuracies to misrepresentations that can misguide decision-making processes.

Closing security gaps in multi-cloud and SaaS environments
In this Help Net Security interview, Kunal Modasiya, SVP, Product Management, GTM, and Growth at Qualys, discusses recent Qualys research on the state of cloud and SaaS security.

DanaBot botnet disrupted, QakBot leader indicted
Operation Endgame, mounted by law enforcement and judicial authorities from the US, Canada and the EU, continues to deliver positive results by disrupting the DanaBot botnet and indicting the leaders of both the DanaBot and Qakbot Malware-as-a-Service operations.

Unpatched Windows Server vulnerability allows full domain compromise
A privilege escalation vulnerability in Windows Server 2025 can be used by attackers to compromise any user in Active Directory (AD), including Domain Admins.

TikTok videos + ClickFix tactic = Malware infection
Malware peddlers are using TikTok videos and the ClickFix tactic to trick users into installing infostealer malware on their computers, Trend Micro researchers have warned.

Is privacy becoming a luxury? A candid look at consumer data use
In this Help Net Security interview, Dr. Joy Wu, Assistant Professor, UBC Sauder School of Business, discusses the psychological and societal impacts of data monetization, why current privacy disclosures often fall short, and what it will take to create a more equitable data ecosystem.

Signal blocks Microsoft Recall from screenshotting conversations
Signal has released a new version of its end-to-end encrypted communication app for Windows that prevents Microsoft Recall and users from screenshotting text-based conversations happening in the app.

The hidden gaps in your asset inventory, and how to close them
In this Help Net Security interview, Tim Grieveson, CSO at ThingsRecon, breaks down the first steps security teams should take to regain visibility, the most common blind spots in asset discovery, and why context should drive risk prioritization.

Lumma Stealer Malware-as-a-Service operation disrupted
A coordinated action by US, European and Japanese authorities and tech companies like Microsoft and Cloudflare has disrupted the infrastructure behind Lumma Stealer, the most significant infostealer threat at the moment.

What good threat intelligence looks like in practice
In this Help Net Security interview, Anuj Goel, CEO of Cyware, discusses how threat intelligence is no longer a nice-to-have; it's a core cyber defense requirement.

Data-stealing VS Code extensions removed from official Marketplace
Developers who specialize in writing smart (primarily Ethereum) contracts using the Solidity programming language have been targeted via malicious VS Code extensions that install malware that steals cryptocurrency wallet credentials.

Why legal must lead on AI governance before it’s too late
In this Help Net Security interview, Brooke Johnson, Chief Legal Counsel and SVP of HR and Security, Ivanti, explores the legal responsibilities in AI governance, highlighting how cross-functional collaboration enables safe, ethical AI use while mitigating risk and ensuring compliance.

Flawed WordPress theme may allow admin account takeover on 22,000+ sites (CVE-2025-4322)
A critical vulnerability (CVE-2025-4322) in Motors, a WordPress theme popular with car/motor dealerships and rental services, can be easily exploited by unauthenticated attackers to take over admin accounts and gain full control over target WP-based sites.

Why EU encryption policy needs technical and civil society input
In this Help Net Security interview, Bart Preneel, Full Professor at University of Leuven, unpacks the European Commission’s encryption agenda, urging a balanced, technically informed approach to lawful access that safeguards privacy, security, and fundamental rights across the EU.

Malicious RVTools installer found on official site, researcher warns
The official site for RVTools has apparently been hacked to serve a compromised installer for the popular utility, a security researcher has warned.

Review: CompTIA Network+ Study Guide, 6th Edition
CompTIA Network+ Study Guide is more than just an exam-prep book: it's a detailed, structured roadmap for professionals seeking foundational networking knowledge that's both vendor-neutral and certification-aligned.

Third-party cyber risks and what you can do
In this Help Net Security video, Mike Toole, Director of Security and IT at Blumira, explores why visibility into your vendor ecosystem is essential: from understanding which vendors you use and what data they access, to how they protect it. Learn how to build third-party scenarios into your incident response plan and keep access permissions in check.

Containers are just processes: The illusion of namespace security
Security boundaries have been in major flux ever since the evolution from single servers to clusters of Linux machines sharing workloads. Kubernetes has become the de facto cloud operating model, and modern approaches to platform engineering are patterned around application instances sharing underlying infrastructure (aka, “multi-tenancy”).

Inside MITRE ATT&CK v17: Smarter defenses, sharper threat intel
In this Help Net Security video, Adam Pennington, MITRE ATT&CK Lead, breaks down what’s new in the ATT&CK v17 release.

AutoPatchBench: Meta’s new way to test AI bug fixing tools
AutoPatchBench is a new benchmark that tests how well AI tools can fix code bugs.

Hanko: Open-source authentication and user management
Hanko is an open-source, API-first authentication solution purpose-built for the passwordless era.

AI voice hijacking: How well can you trust your ears?
How sure are you that you can recognize an AI-cloned voice? If you think you’re completely certain, you might be wrong.

Nation-state APTs ramp up attacks on Ukraine and the EU
Russian APT groups intensified attacks against Ukraine and the EU, exploiting zero-day vulnerabilities and deploying wipers, according to ESET.

Be careful what you share with GenAI tools at work
We use GenAI at work to make tasks easier, but are we aware of the risks?

Outsourcing cybersecurity: How SMBs can make smart moves
Outsourcing cybersecurity can be a practical and affordable option. It allows small businesses to get the protection they need without straining their budgets, freeing up time and resources to focus on core operations.

Cybersecurity jobs available right now: May 20, 2025
We’ve scoured the market to bring you a selection of roles that span various skill levels within the cybersecurity field. Check out this weekly selection of cybersecurity jobs available right now.

CTM360 maps out real-time phishing infrastructure targeting corporate banking worldwide
A phishing operation that targets corporate banking accounts across the globe has been analyzed in a new report by CTM360. The campaign uses fake Google ads and advanced filtering techniques to steal sensitive login credentials and bypass MFA.

Product showcase: Secure digital and physical access with the Swissbit iShield Key 2
The Swissbit iShield Key 2 uniquely combines phishing-resistant digital authentication with physical access control. It enables enterprises and public authorities to secure operating systems, online services, and restricted physical areas using just one device – simplifying management and enhancing security across the board.

New infosec products of the week: May 23, 2025
Here’s a look at the most interesting products from the past week, featuring releases from Anchore, Cyble, Outpost24, and ThreatMark.


from Help Net Security https://ift.tt/HnMi371

I don't mean to be alarmist, but I do think it's time to start assuming everything you see online is fake.

The internet is full of content produced by real people, of course (this article included). But AI-generated media is getting so realistic that it almost puts you at a disadvantage to presume the content you're scrolling past on your feeds is legitimate.

Don't skip this article because you know what AI content looks like—the current stuff your algorithm delivers to your social media feeds is easy to spot if you know what you're looking for. But even if you can identify AI slop the second it hits your eyeballs, you need to know you're not ready for the next wave of AI-generated videos. That wave isn't just on its way—it's already here.

AI content is already fooling people

Most of us are acutely aware of the "AI video" look: This "tragic" video of a cat parent saving their kitten by throwing it out of a burning airplane is obvious AI slop to most who watch it. You probably know Trump isn't working this construction site, and you most assuredly can understand this family of cat farmers is, in fact, AI-generated.

But there are videos that aren't so obvious, especially to those of us not quite so in tune with AI, or technology in general. You might know this video of babies dancing in a circle is AI, but plenty of the people in the comments didn't (assuming they aren't bots, either). You might also be able to tell that this family of pets isn't really watching a bird investigate a toy alligator, but, again, plenty can't. And there is no end to the America's Got Talent videos that feature "realistic" yet impossible visuals—videos that still capture the hearts of hundreds of thousands, if not millions, of people. (I weep.)

But I'm not writing this piece today because I'm concerned about how many of these "believable" AI videos are tricking way too many people into thinking they're real. I am worried about that, but those worries pale in comparison to my new fears.

So far, most of the AI videos taking over social media feeds rely mainly on their visuals and background sounds to sell their alleged authenticity. You'll notice none of the characters in any of these videos actually speak. If they do, it's immediately off-putting, with out-of-sync lip movements and, typically, robotic voices. It's been easier for AI creators to put the emphasis on the realism of the people and animals in their videos, and hope you're wowed enough by a baby dancing with a lion to not think, "this is bullshit, right?"

Even OpenAI's Sora video model, which shocked me with its quality in February of last year, was working off its realistic visuals. A video of a woman "filming" her reflection through a train window was too real for comfort, but Sora wasn't spitting out fully rendered conversations. If you see such a scene on your feeds, you probably assume, of course, that it's a real video—or at least one generated by humans.

AI video is about to change completely

Something happened this week that only made me more pessimistic about the future of truth on the internet. During this week's Google I/O event, Google unveiled Veo 3, its latest AI video model. Like other competitive models out there, Veo 3 can generate highly realistic sequences, which Google showed off throughout the presentation. Sure, that's not great, but it's also nothing really new.

But Veo 3 isn't just capable of generating video that might trick your eye into thinking it's real: Veo 3 can also generate audio to go alongside the video. That includes sound effects, but also dialogue—lip-synced dialogue.

In order to demonstrate Veo 3's audio/video capabilities, Google showed off a clip of an old sailor at sea. The video quality is sharp and realistic, and the words the man speaks are synced to his lip movements. Of course, knowing the video is AI, you notice quirks that give away the game (to my eye, this looks like a high-quality animation more than a live-action shot), but I am quite confident this video would fool a lot of fans of fake AGT videos.

But even this clip wasn't what inspired my newfound fears—it was the videos that users started making once they got their hands on Veo 3. PetaPixel has a great roundup of some of the "best" Veo 3 videos people have made so far, but I'll highlight some of the ones that should scare you most.

This clip shows a streamer playing Fortnite. Everything, including the game footage, was generated with Google's AI:

This clip shows three concerts that never happened, featuring musicians and crowds that do not exist. The music isn't good, but that's not the point. The music, from the vocals to the instrumentals, was generated entirely by the AI, and then synced to lips, drums, guitars, and strings:

But this clip is, without a doubt, the one that should sound the alarm for each and every one of us. Someone generated a fake video of a fake car show, featuring fake interviews with fake attendees. It's far from perfect, but any AI quirks are totally overshadowed by the surface-level realism here. Not only would the AI's Got Talent fans buy this, I would buy this, especially if I wasn't looking out for it:

It's the visuals; it's the dialogue; it's the crowds; it's the lighting; it's the candid laughter at "mistakes;" it's the sound of the mic being "bumped" into. Congratulations on noticing the dialogue often doesn't make sense, or that the people in the background defy the laws of physics—you won't notice it when it hits mid-scroll on TikTok or Instagram.

Even Veo 2, which isn't as powerful as Veo 3, now offers tools for realism, like the ability to dictate how you want the camera to move. And both models are available in Flow, Google's AI video editor of sorts. Creators now have the ability to generate highly realistic AI content that feels like it was filmed in-person, and the tech is only getting better.

Google's best AI video generator tools cost $250 a month through its new AI Ultra subscription plan. That's expensive, but not out of reach for plenty of people interested in making AI-generated content. But the $20 per month plan, AI Pro, still comes with Veo 2 and Flow access. The rate limits are lower, but I wouldn't be shocked to see some realistic slop come out of those limitations, too.

It's time to be a full-time skeptic

None of this tech is perfect. I'm not here to tell you that everything Veo 3 spits out is indistinguishable from real content, or that the videos are absent any of the usual AI tells. In fact, there's clearly something up with Veo 3's training data: As 404 Media reports, the model continuously generates the same weird "dad joke" whenever you ask for a generation of a comedian performing standup.

What I'm saying is, it's time to turn on your bullshit detectors and keep them active full time. When engaging with videos on the internet—especially short-form algorithmic clips—you might be safer operating under the assumption the content is fake from the jump, and require proof beyond a reasonable doubt that what you're seeing wasn't generated with a simple prompt and a $250 budget. That feels extreme, but after what I've seen this week, I don't really see another way to engage with this content going forward.

We're in scary territory now. Today, it's demos of musicians and streamers. Tomorrow, it's a politician saying something they didn't; a suspect committing the crime they're accused of; a "reporter" feeding you lies through the "news."

I hope this is as good as the technology gets. I hope AI companies run out of training data to improve their models, and that governments take some action to regulate this technology. But seeing as the Republicans in the United States passed a bill that included a ban on state-enforced AI regulations for ten years, I'm pretty pessimistic on that latter point.

In all likelihood, this tech is going to get better, with zero guardrails to ensure it advances safely. I'm left wondering how many of those politicians who voted yes on that bill watched an AI-generated video on their phone this week and thought nothing of it.


from Lifehacker https://ift.tt/DQuXVAE

CVE-2025-4427 and CVE-2025-4428 – the two Ivanti Endpoint Manager Mobile (EPMM) vulnerabilities that have been exploited in the wild as zero-days and patched by Ivanti last week – are being leveraged by a Chinese cyber espionage group that has been exploiting zero-days in edge network appliances since at least 2023, EclecticIQ researchers have shared.

Among the entities targeted in this campaign were:

  • a local government authority and healthcare organizations in the UK;
  • a research institute, a legal firm, a telco and a manufacturer in Germany;
  • an aerospace leasing company in Ireland;
  • a healthcare provider, a medical device manufacturer, a firearms manufacturer, and even a cybersecurity firm specializing in mobile threat defense and enterprise device security in the US;
  • a multinational bank operating in South Korea;
  • a Japanese automotive parts supplier.

The attack campaign

By chaining together the two vulnerabilities, the attackers could achieve remote code execution on internet-exposed Ivanti EPMM deployments without having to authenticate themselves first.

They set up a reverse shell on the compromised systems and deployed KrustyLoader malware downloaded from publicly accessible Amazon S3 buckets, the Sliver backdoor/implant, and an open-source reverse proxy tool.

They also managed to extract data from the Ivanti EPMM databases: data related to the managed mobile devices (IMEI, phone numbers, location, etc.), LDAP users, and Office 365 refresh and access tokens.

EclecticIQ does not mention whether the compromised instances were deployed by the organizations on-premises or in their cloud environments. Judging by overlapping indicators of compromise, though, Wiz researchers have spotted the same activity by the same Chinese threat actor, which is tracked as UNC5221.

“We can confirm that the incident we found was on cloud hosted virtual appliances and not an on-prem device. This doesn’t mean that the attacker explicitly targeted cloud environments – from an outside network perspective it is hard to differentiate the two deployment options – but it does mean that both cloud and on-prem customers are at risk,” Gili Tikochinski, researcher at Wiz, told Help Net Security.

EclecticIQ researchers say that UNC5221 demonstrated a deep understanding of EPMM’s internal architecture by repurposing legitimate system components for data exfiltration.

“Given EPMM’s role in managing and pushing configurations to enterprise mobile devices, a successful exploitation could allow threat actors to remotely access, manipulate, or compromise thousands of managed devices across an organization,” they added.

Also, one of the IP addresses associated with these attacks points to UNC5221 also being the ones that exploited vulnerable SAP NetWeaver installations earlier this month.

Patch and search for evidence of compromise

Organizations using Ivanti EPMM should upgrade their instances to one of the following fixed versions: 11.12.0.5, 12.3.0.2, 12.4.0.2, or 12.5.0.1.
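A rough way to express that upgrade check in code: compare an installed build against the fixed release for its maintenance branch. This sketch assumes EPMM versions are plain dotted integers and that the four fixed releases above are the relevant branches — treat it as an illustration, not an official version policy:

```python
# Fixed EPMM releases listed in the advisory.
FIXED_VERSIONS = ["11.12.0.5", "12.3.0.2", "12.4.0.2", "12.5.0.1"]

def parse(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def is_patched(version):
    """True if `version` is at or above the fixed release for its branch."""
    v = parse(version)
    for fixed in FIXED_VERSIONS:
        f = parse(fixed)
        if v[:3] == f[:3]:  # same maintenance branch
            return v >= f
    # Branches newer than any listed fixed release are assumed patched.
    return v > parse(FIXED_VERSIONS[-1])
```

Tuple comparison gives correct ordering here (so 12.3.0.10 would rank above 12.3.0.2), which naive string comparison would get wrong.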

The company also pointed out that a 400 response appearing in logs after the patch has been applied does not indicate exploitation.

They did not share any indicators of compromise, but both Wiz and EclecticIQ have, so organizations can look for them.
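Hunting for those published indicators can be as simple as grepping logs for known-bad values. A minimal sketch follows; the indicator values in `IOCS` are placeholders (a documentation IP and an example hostname), not real IoCs from either report:

```python
# Placeholder indicators; substitute the real IoCs published by
# Wiz and EclecticIQ before running this against production logs.
IOCS = {"203.0.113.10", "malicious-bucket.s3.example.com"}

def find_ioc_hits(log_lines, iocs=IOCS):
    """Return (line_number, indicator) pairs for every match."""
    hits = []
    for number, line in enumerate(log_lines, start=1):
        for ioc in iocs:
            if ioc in line:
                hits.append((number, ioc))
    return hits
```

For large log volumes you would feed lines in from files or a SIEM query rather than a list, but the matching logic is the same.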



from Help Net Security https://ift.tt/YIAgoKO


If you want to begin coding—either for a career shift, a hobby, or just to better understand your favorite apps—this bundle is worth a look. You can get Microsoft Visual Studio Professional 2022 bundled with the Learn to Code Certification course pack currently on sale for $55.97—a steep markdown from its original $1,999 price tag. The standout here is that you’re not just getting a license for Visual Studio, which is already a powerhouse development environment, but also lifetime access to more than 60 hours of beginner-to-advanced coding instruction.

Visual Studio Pro 2022 itself is widely used in professional environments. It’s built for serious dev work—think C#, Python, .NET, and more. You get tools like IntelliCode for AI-assisted coding suggestions, built-in Git support, debugging, and advanced testing features useful whether you're building a small app or working on enterprise-level projects. Normally, licenses for just the IDE can get expensive, so getting it bundled with a full curriculum covering JavaScript, React, Python, CSS, and more is the kind of package that feels designed for someone starting out with ambition or a side hustle in mind.

That said, this is a very Microsoft-centric deal. If you’re aiming to become a Mac-based developer or work heavily with Xcode and iOS projects, this probably won’t be your best fit. The same goes if you’re already deep into development and just looking for a few advanced certifications—some of the coursework may feel too foundational. But if you’ve been stuck in tutorial limbo, this bundle provides structure, software, and real-world tools at a reasonable price. For under $60, it’s hard to beat the value, especially for learners who want both a development platform and a guided roadmap in one place.


from Lifehacker https://ift.tt/YJOaH85