The Latest

RSAC 2026 Conference is taking place at the Moscone Center in San Francisco March 23 – 26. With hundreds of booths, countless product demos, and nonstop buzz, navigating RSAC can be overwhelming. That’s why we’ve done the legwork to highlight the standout companies you won’t want to miss.

Whether you’re looking for cutting-edge innovation, industry veterans with new offerings, or rising stars shaking things up, these exhibitors are bringing something special to the floor this year. Be sure to carve out time in your schedule to stop by, as you might just discover your next big opportunity.

Booth S-3316 | Book a demo


Apiiro is an application security company with offices in New York and Tel Aviv. Its agentic Application Security Posture Management (ASPM) platform helps security and development teams detect, prioritize, and fix risks across the software development lifecycle, from design through code to deployment. Powered by patented Deep Code Analysis technology, the platform provides code-to-runtime context, automated threat modeling, and AI-driven remediation. Apiiro has raised over $135 million in funding from investors including General Catalyst, Kleiner Perkins, and Greylock. Gartner, IDC, and Frost & Sullivan have all recognized Apiiro as a leader in ASPM, and its customers include USAA, BlackRock, Shell, and TIAA.

Booth ESE-19


Cline is an open-source AI coding agent that runs inside Visual Studio Code, JetBrains IDEs, and the command line. It goes beyond code completion by reading codebases, creating and editing files, executing terminal commands, automating browser interactions, and connecting to external tools via the Model Context Protocol, all with user approval at each step. Developers can bring their own API keys and connect to any major AI provider, including Anthropic, OpenAI, Google Gemini, AWS Bedrock, and local models. Cline has surpassed 5 million installs, has nearly 60,000 GitHub stars, and is trusted by developers at companies including Samsung, Microsoft, Salesforce, Amazon, and Visa.

Booth N-5181 | Book a strategy session


GlobalSign by GMO is a leading Certificate Authority and digital identity provider founded in Belgium in 1996 and now a subsidiary of Japan’s GMO Internet Group. The company issues SSL/TLS certificates, S/MIME email security certificates, code signing certificates, and document signing solutions to businesses, enterprises, cloud providers, and IoT manufacturers worldwide. Its Atlas platform enables automated certificate lifecycle management at scale. GlobalSign by GMO is a founding member of the CA/Browser Forum and became a Qualified Trust Service Provider under the eIDAS regulation in both the EU and the UK. With over 600 employees across more than a dozen countries, the company serves clients including Microsoft, Cisco, and Johnson & Johnson.

Booth S-2452


IDEMIA is a global leader in biometrics and cryptography, providing identity and security solutions to governments and enterprises in more than 180 countries. The company operates through two main divisions: IDEMIA Secure Transactions, which delivers payment cards, eSIM connectivity, and cryptographic security including hardware security modules and post-quantum cryptographic libraries; and IDEMIA Public Security, which provides biometric solutions for border control, law enforcement, access control, and travel. IDEMIA is trusted by more than 600 governmental organizations and 2,400 enterprises.

Booth N-5245 | Book a demo


Mimecast is a leading cybersecurity company focused on managing and mitigating human risk for organizations worldwide. Its AI-powered, API-enabled platform is built to protect businesses from a broad spectrum of cyber threats by integrating advanced technology with human-centric security pathways. The platform enhances visibility, delivers strategic insight, and enables decisive action to safeguard critical data and collaborative environments. It also actively engages employees in reducing risk and improving productivity.

Booth ESE-28


MyCISO is a SaaS cybersecurity platform designed to help organizations assess, improve, and manage their security posture without relying on spreadsheets or fragmented tools. Its Security Operating System centralizes assessments, risk management, compliance, supplier security, incident response, security awareness, and board-ready reporting into a single platform supporting over 65 frameworks. AI-powered insights and automated external and internal vulnerability scans help security leaders prioritize risks, track maturity over time, and demonstrate progress to executives.

Booth S-0262 | Book a demo


Novee is an AI-powered penetration testing platform that continuously simulates real-world cyberattacks to help organizations find and fix vulnerabilities before hackers do. Unlike traditional annual pentests or generic scanners, Novee deploys a hive-mind of AI agents trained on offensive security tradecraft to map environments, uncover exploit chains, and identify business logic flaws. It can begin with zero knowledge, mirroring how real attackers operate, then expand into deeper coverage. For every issue discovered, Novee validates the finding and delivers personalized, step-by-step remediation guidance.

Booth S-3111 | Book a demo


Teleport is an infrastructure identity company that provides a unified platform for securing access across classic and AI infrastructure. Its platform consolidates identity for humans, machines, workloads, and AI agents using cryptographic identity and short-lived certificates, eliminating static credentials and standing privileges. Key capabilities include zero trust access, identity governance, privileged access management, machine and workload identity, and security for agentic AI and Model Context Protocol tooling. Teleport has raised over $169 million in funding and is valued at $1.1 billion. Customers include Nasdaq, DoorDash, Accenture, Discord, and GitLab.

Booth ESE-09


Unbound AI is a cybersecurity company that created the Agent Access Security Broker (AASB) category, a governance layer purpose-built for AI coding agents. Its platform helps enterprises discover every AI coding agent in use across their organization, including tools such as Cursor, Claude Code, GitHub Copilot, and Cline, assess their risk, and enforce granular policies over terminal commands, MCP server connections, and sensitive data flows. The platform processes over one million agent tool calls per month and deploys via MDM with no code changes required. Customers include THG Ingenuity, WeWork, Siemens, and Exterro.

Booth ESE-11


12Port is a cybersecurity company providing an agentless Privileged Access Management (PAM) platform for enterprises and managed service providers. Its platform secures, monitors, and audits privileged sessions across physical, virtual, and cloud environments without requiring software agents on target endpoints. Core capabilities include a credential vault with FIPS 140-3 validated encryption, automated credential rotation, just-in-time access controls, session recording, MFA enforcement, and AI-powered session intelligence that detects policy violations and anomalies in real time. The platform supports hybrid, multi-cloud, and air-gapped environments, integrating with Active Directory, Entra ID, SSO, SIEM tools, and cloud platforms.


from Help Net Security https://ift.tt/dzW3VT0

In this Help Net Security interview, Gidi Cohen, CEO at Bonfy.AI, addresses what he sees as the most pressing gap in AI agent security: data-layer risk. While the industry focuses on prompt injection and model behavior, Cohen argues the deeper threat is autonomous AI agents operating across systems with no visibility into what data they access, combine, or expose.

He explains how Bonfy.AI approaches this through three areas: controlling what data agents can access for grounding, monitoring content as it moves through tool calls and MCP servers, and letting agents query Bonfy in real time to check whether an action is safe before they take it. The conversation covers threat modeling, anomaly detection, multi-agent delegation, model versioning, and practical advice for CISOs navigating pressure to deploy AI at scale.


When we talk about “AI agent security,” most people immediately think about prompt injection or jailbreaks. What’s the threat vector that keeps you up at night that almost nobody in the industry is preparing for?

The threat that keeps us up at night isn’t another clever jailbreak; it’s autonomous data misuse by AI agents operating across systems the enterprise doesn’t fully see, understand, or govern yet.

Most of the conversation today is still “LLM-centric”: prompt injection, jailbreaks, model behavior. But in large organizations, the real risk is shifting to the data layer of increasingly autonomous workflows: agents that can read from many systems, call tools and MCP servers, and then take actions (like send emails, update records, publish content) without a human in the loop at every step. Once you have that, any mistake in how data is accessed, combined, or shared quickly becomes a systemic exposure problem, not just a bad answer on a chat screen.

What almost nobody is prepared for is that these agents don’t live on a single endpoint or inside a neat perimeter. They run in Microsoft, Google, Salesforce, custom app frameworks, MCP-based toolchains, often as “system agents” that aren’t even tied to a specific user session. Traditional DLP, DSPM, and browser-centric controls were never designed to watch data as it flows through a multi-hop chain of LLM calls, vector stores, MCP servers, and downstream automations. So organizations end up effectively flying blind: they don’t know which sensitive content is feeding agents, which tools receive it, or where AI-generated outputs with regulated or customer-specific data land.

That’s the vector we focus on at Bonfy: protecting the organization’s data throughout the full lifecycle of AI and agents, not just protecting the model from bad prompts. Our platform applies the same contextual, entity-aware controls to humans, systems, and AI agents, across email, SaaS apps, collaboration tools, Copilot, MCP-connected agents, and custom GenAI workflows. We control what data is available for grounding, inspect what goes into prompts and tools, monitor what comes out into emails, files, and knowledge bases, and now even let agents call our MCP server during their own reasoning to ask, “Is this safe to share?” before they act.
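The “ask before you act” pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Bonfy’s actual API: the tool name, labels, and policy logic are all invented for the example; in a real deployment the check would be an MCP tool call to the vendor’s server.

```python
# Hypothetical sketch of a data-layer guardrail an agent consults before
# every outbound action. Names (check_safe_to_share, label sets) are
# illustrative only, not a real Bonfy interface.

SENSITIVE_LABELS = {"PHI", "EU_PII", "CUSTOMER_DEAL_TERMS"}
TRUSTED_DESTINATIONS = {"internal-crm", "internal-wiki"}

def check_safe_to_share(content_labels: set, destination: str) -> dict:
    """Toy stand-in for a guardrail tool: allow, or block with a reason."""
    leaked = content_labels & SENSITIVE_LABELS
    if leaked and destination not in TRUSTED_DESTINATIONS:
        return {"decision": "block",
                "reason": f"{sorted(leaked)} -> {destination}"}
    return {"decision": "allow", "reason": "no policy violation"}

def agent_send(content_labels: set, destination: str) -> str:
    """The agent consults the guardrail before acting, not after."""
    verdict = check_safe_to_share(content_labels, destination)
    if verdict["decision"] != "allow":
        return f"suppressed: {verdict['reason']}"
    return f"sent to {destination}"

print(agent_send({"PHI"}, "external-mcp-tool"))   # blocked
print(agent_send({"PUBLIC"}, "internal-wiki"))    # allowed
```

The point of the pattern is that the policy lives outside the agent: the agent’s prompt only needs to say “verify before sending,” while the decision logic, labels, and audit trail stay in the guardrail layer.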

If you assume agents will be everywhere, and that they will eventually touch your most sensitive customer, employee, and IP data, the critical security question is no longer “Can someone jailbreak the model?” but “Do we have data-layer guardrails that work at the speed and scale of autonomous AI?”

A traditional application has a relatively predictable blast radius when it’s compromised. An AI agent that can browse the web, write files, call APIs, and send emails does not. How do you even begin to threat-model something that dynamic?

You cannot threat-model AI agents the way you threat-model a single web app; you have to threat-model the data flows and actors around them, end-to-end.

For us, the starting point is to stop thinking “one application, one blast radius” and instead map a chain of four things:

  • What data the agent can be grounded on
  • Which tools and MCP servers it can call
  • Which humans and systems it is effectively impersonating
  • All the outbound channels where its outputs can land

Once you can see that full multi‑hop path, you can assign risk not just to the model, but to specific agents, tools, users, and data sets.

In practice, we do three concrete things here. First, we control the grounding step with granular, contextual labeling and access control on the underlying data sources, so you can define which content is even eligible to be pulled into an agent workflow for a given business context. Second, we monitor upstream and downstream traffic, prompts, retrieved docs, emails, files, SaaS updates, across channels, so you can see when an agent’s behavior creates a confidentiality, integrity, or privacy incident in the real world. Third, we plug into the agent’s reasoning loop via our own MCP server, so agents can ask us in real time, “Is this content safe to send to this tool, this user, or this destination?” before they act.

That gives you a very different kind of threat model: instead of trying to predict every possible action of a dynamic agent, you define and enforce guardrails on the data that can flow through it, the entities it can impact, and the points where that flow must be inspected or stopped. Over time, because Bonfy tracks humans, systems, and AI agents as first‑class risk entities, you can see which agents consistently operate near dangerous trust boundaries and tighten controls there, rather than treating “AI” as one monolithic, uncontrollable blast radius.

When an agent chains together multiple tools, each tool call potentially exposes data to the next step in the chain. Is anyone auditing those intermediate states, and what does that audit look like?

Right now, almost nobody is truly auditing those intermediate states, and that’s exactly where a lot of the real risk hides.

When an agent chains tools together, each call is effectively a mini data‑sharing event: the agent is taking some slice of context, handing it to a calendar API, then to a CRM MCP server, then maybe to an email‑sending service. Each of those handoffs potentially exposes data to the next step in the chain, but most of today’s “agent security” focuses on configuration – what tools are allowed – not on the actual content flowing between those tools.

Our view is that you have to treat those intermediate states as first‑class audit points. That’s why we expose Bonfy as an MCP server the agent can call during reasoning: instead of blindly passing context from Tool A to Tool B, the agent can invoke Bonfy in between, “Is this safe to share with this specific tool or destination, given who the data belongs to and where it’s going?” Every one of those checks is logged with what was inspected, which policies fired, what entities (customers, employees, consumers) were involved, and what decision was made, so you have an auditable trail across the entire chain – not just at the first prompt and the final email.

In practice, that audit looks less like a traditional API log and more like a data‑plane journal for the agent’s workflow: step‑by‑step records of the content the agent read, what it tried to send to each tool, the risk rating and labels Bonfy applied, and whether we allowed, modified, or blocked the action. Because it’s the same entity‑aware engine we use for email and SaaS, security teams can answer questions like “Which agents exposed EU customer data to external MCP servers last week?” with real evidence, instead of hoping the agent framework’s configuration pages tell the whole story.

Traditional SIEM and log analysis is built around human actors with consistent behavioral baselines. What needs to change about anomaly detection when your “actor” can spin up, complete a goal, and disappear in under 30 seconds?

When your “actor” is an AI agent that lives for 30 seconds, you still can’t anchor anomaly detection in long‑term agent behavior alone; you have to anchor it in content, context, and the human or system behind that agent instance.

Traditional SIEM assumes stable identities and patterns over days, weeks, and months; you baseline a human, then look for deviations. With agents, the pattern is inverted: the agent identity is ephemeral, but the data it touches, the user it may be acting on behalf of, and the trust boundaries it crosses are very real and often persistent. So anomaly detection has to move from “Is Alice behaving strangely today?” to “Is this combination of content, destination, and actor – human, system, or agent instance – acceptable for our business context right now?”

That’s exactly where Bonfy focuses. We analyze the unstructured content itself, enriched with entity awareness – which customer, which consumer, which product line, which regulatory regime – and correlate that with who or what is acting: an employee, a service account, a Copilot scenario, or a short‑lived AI agent, plus the relationship between them. Even if the agent spins up and down in under a minute, the data trail it creates across email, SaaS apps, collaboration tools, and AI systems is visible through a single, contextual lens.

We then model both humans and agents – and the links between them – as first‑class entities in our Knowledge Graph, so you can attribute risky patterns not just to a transient agent ID, but to the user behind it (where applicable), specific agents or agent classes, and their role in the broader business context. Over time, you’re no longer flying blind with thousands of invisible bots; you’re managing a portfolio of human and non‑human actors and their relationships, all evaluated through the same data‑centric risk model.

Multi-agent systems, where one agent orchestrates several others, introduce a delegation chain. How do you prevent a compromised sub-agent from poisoning the trust relationship with the orchestrator?

The uncomfortable answer is that in most multi‑agent systems today, nobody is really protecting that delegation chain; they’re trusting that if the orchestrator is “good,” everything downstream will behave. From Bonfy’s point of view, you have to flip that: you treat every sub‑agent call as untrusted from a data perspective and give the supervising agent the tools to inspect what goes in and what comes out before it accepts or forwards anything.

Concretely, the orchestrator should never blindly consume a sub‑agent’s output. At least from a data perspective, we give the supervising agent a Bonfy MCP tool it can call inline to inspect the sub‑agent’s input and output and verify it does not violate any policy, including confidentiality, privacy, and data‑integrity checks such as “does this summary suddenly include another customer’s data or unexpected PHI?” The orchestrator’s prompt literally encodes this behavior: “Delegate to sub‑agents, but before acting on their results or passing them on, verify with Bonfy that the content is safe for this destination and business context.”
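The delegation gate can be reduced to a simple shape: the orchestrator never forwards a sub‑agent result until an inline check passes. The sketch below is hypothetical; the function and field names are invented for illustration and do not reflect a real Bonfy interface.

```python
# Hypothetical delegation gate: the orchestrator scopes each task to one
# customer and quarantines any sub-agent output that references others.

def verify_output(expected_customer: str, output: dict) -> bool:
    """Toy integrity check: the result may only reference the customer
    the delegated task was scoped to."""
    referenced = set(output.get("customers_referenced", []))
    return referenced <= {expected_customer}

def orchestrate(task_customer: str, sub_agent_result: dict) -> str:
    """Gate every sub-agent result before accepting or forwarding it."""
    if not verify_output(task_customer, sub_agent_result):
        return "quarantined: sub-agent output crossed a customer boundary"
    return f"accepted: forwarding result for {task_customer}"

# A compromised sub-agent slips another customer's data into its summary:
poisoned = {"summary": "...", "customers_referenced": ["acme", "globex"]}
clean = {"summary": "...", "customers_referenced": ["acme"]}

print(orchestrate("acme", poisoned))  # quarantined
print(orchestrate("acme", clean))     # accepted
```

In a production system the `verify_output` step would be a call to an external content-inspection service rather than a local set comparison, so that the check itself cannot be rewritten by a compromised agent.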

Because Bonfy looks at the content itself, enriched with entity awareness, which customer, which consumer, which product line, which jurisdiction, it can flag when a compromised or mis‑behaving sub‑agent tries to inject sensitive or inconsistent data into the chain, even if all the agents are short‑lived and share a generic identity in the framework. All of those checks are logged on the same platform we use for email and SaaS: you get an audit trail of which orchestrator called which sub‑agent, what data flowed, what policies triggered, and whether Bonfy allowed, modified, or blocked the orchestrator’s next step. In other words, we’re not trying to “trust” the delegation chain into behaving – we’re instrumenting it so that any sub‑agent output has to pass a data‑centric policy gate before it can poison the rest of the workflow.

AI agents frequently rely on MCP servers, plugins, and third-party tool integrations. That ecosystem is growing faster than anyone can vet. Are we sleepwalking into a supply chain crisis?

We’re not just sleepwalking into a supply chain crisis; in many enterprises, we’re already there. We’ve just decided to trust whatever tool an agent feels like calling.

An MCP server or plugin is effectively a black-box micro‑vendor that your agents can hand sensitive data to in the middle of a workflow. In a typical environment you can have dozens or hundreds of these tools (internal services, third‑party APIs, enrichment feeds) all being orchestrated dynamically by LLMs with no human in the loop and very little security review. From a data‑security perspective, every one of those tools is now part of your AI supply chain, but very few organizations treat them that way.

Most of the early “AI agent security” market is focused on configuration posture: what agents exist, which tools they can see, which permissions they’re granted. That’s necessary, but it’s not sufficient, because just like any other software, those tools can be used safely or abused depending on what data flows through them. We deliberately focus on the data layer instead of just the configuration layer: what content is being sent to which MCP server, which entities it refers to, which jurisdictions it touches, and whether that combination is acceptable for your business and regulatory context.

Concretely, we give organizations three levers. First, we control grounding at the data source with granular, contextual labeling so you can prevent certain classes of information, say PHI, EU PII, or customer‑specific deal terms, from ever being eligible for a given agent or tool in the first place. Second, we monitor and enforce on the way out, analyzing emails, files, SaaS updates, and other outputs generated via agents, regardless of which plugins they used along the way. And third, through our own MCP server, we let agents ask us in real time, “Is this safe to send to this tool or this destination?” before data is handed off to a third‑party service.

So yes, there is an AI‑era supply chain problem building up around MCP servers and plugins, but the way out is not to freeze innovation or somehow vet every tool in the ecosystem. It’s to put data‑centric guardrails in place so that, no matter how fast the agent ecosystem grows, sensitive content is governed consistently across every agent, every tool, and every workflow.

Model providers update their weights, sometimes silently. An agent that behaved one way on Monday may behave differently on Friday with no change to your own code. How should security teams be thinking about model versioning as a compliance and risk issue?

Security teams need to assume the model is a moving part of the supply chain, not a fixed component they can fully certify once and forget.

You may not control when your provider tweaks weights, safety layers, or routing, but you can control the data guardrails around whatever model happens to sit behind an endpoint. For us, that starts with treating model versioning as a compliance‑relevant change: you want to know which classes of data each application can send to “an LLM,” and you want evidence that, regardless of whether that’s Model X on Monday or Model Y on Friday, the same policies are being enforced on prompts, retrieved documents, tool calls, and outputs.

Our approach is intentionally model‑agnostic. We don’t embed ourselves into a specific customer model; we operate as a customer‑agnostic AI data‑security layer that inspects content in and out of agents, copilots, and LLMs using our own entity‑aware engine. Because Bonfy operates with customer‑agnostic AI models, you can apply the same safeguards even if your underlying LLM usage changes over time – knowingly or not – because the policies live in our platform, not in a particular model checkpoint.

From a risk and compliance perspective, that gives you two critical things. First, a stable, auditable layer: you can show regulators and auditors a consistent record of which sensitive, regulated, or customer‑specific data was allowed or blocked at the data plane, even as model versions evolved behind the scenes. Second, a way to detect when model behavior shifts in risky ways – for example, suddenly including more granular customer details in summaries – because Bonfy continues to classify, label, and enforce policies on the content itself, independent of which model produced it.

Some vendors are marketing “secure AI agents” almost as a feature checkbox. What does rigorous agent security look like, and how does a security buyer cut through the noise?

“Secure AI agents” is not a checkbox; it’s an end‑to‑end discipline that has to follow the data wherever agents read, reason, call tools, and write.

From our perspective, rigorous agent security has three pillars. First, you control the grounding: which content an agent is allowed to see in the first place, using granular, contextual labeling and access rules on systems like SharePoint, email, CRM, and other SaaS apps. Second, you protect data in‑use during the agent’s reasoning: when it calls MCP servers, plugins, or internal APIs, you need inline inspection that can tell you whether it’s about to hand PHI, customer‑specific details, or regulated content to the wrong tool or third party. Third, you govern the outputs: emails, files, tickets, and other artifacts the agent generates must be checked for leakage and policy violations before they hit a human or an external system.

Where buyers get lost is that a lot of “agent security” offerings stop at configuration posture – listing agents, toggling tools, managing permissions – without ever truly seeing what data flows through those automations. That’s necessary hygiene, but it won’t save you from an agent that’s perfectly “configured” and still exfiltrates customer data via an allowed MCP plugin. Bonfy deliberately focuses on the data layer instead of just the control plane: the same entity‑aware engine we use for email and SaaS applies to agent prompts, retrieved documents, MCP calls, and outputs, with one set of policies governing humans, systems, and AI agents alike.

If you’re a security buyer trying to cut through the noise, we’d suggest three simple tests. Ask vendors: Can you see and classify the actual content flowing into and out of my agents, across all my major channels – not just log which tools they’re allowed to call? Can you enforce policy consistently for both humans and agents, so that “this customer’s data cannot leave this boundary” is true everywhere? And can my agents query your platform in real time – for example via an MCP server – to check whether a given action is compliant before they execute it? If the answer to any of those is no, you’re looking at a checkbox, not rigorous agent security.

If you could change one thing about how the security industry is approaching AI agent risk, before we reach a major public breach that forces the conversation, what would it be?

I’d change one thing: stop treating AI agent risk as an abstract “future AI problem” and start treating it as a very concrete data problem that is already in production today.

Right now, AI adoption is outpacing governance; agents are already reading, transforming, and generating sensitive content across email, SaaS apps, internal systems, and MCP‑connected services, while most organizations have no unified visibility into what data those agents touch. The industry is pouring energy into models, prompts, and configuration posture, but far less into a basic question: where is my confidential, regulated, or customer‑specific information flowing as these automations execute multi‑step workflows?

From Bonfy’s perspective, the shift we need is to put data at the center of the agent‑risk conversation. That means building systems that can see and classify unstructured content wherever it moves, understand the people, customers, and jurisdictions behind that content, and apply consistent policy whether the actor is a human, a SaaS app, or an ephemeral AI agent. It also means giving agents a way – via mechanisms like our MCP server interface – to ask in real time, “Is this safe to send or store here?” instead of assuming their toolchain will do the right thing by default.

If we make that mental shift now, we don’t have to wait for a headline‑grabbing breach to discover that we were effectively flying blind while AI automated the movement of our most sensitive information.

For a CISO who is being pressured by the business to deploy AI agents at scale while simultaneously being held responsible for data security outcomes, what is the most honest advice you can give them?

The most honest thing we can tell a CISO in that position is: do not accept “deploy first, figure out the data risk later” as the operating model, even if that’s the pressure you’re under.

You’re not going to stop AI agents; you can, however, insist on a phased rollout where the first deliverable is visibility, not automation. Start by instrumenting the channels where agents will read and write – email, collaboration, SaaS apps, internal systems, MCP‑connected tools – so you can see what sensitive, regulated, or customer‑specific content they would touch if you turned them fully loose. That real data gives you the leverage to have an adult conversation with the business: “Here is where we can safely automate today, here is where we need guardrails, and here are the use cases that stay human‑in‑the‑loop for now.”

From there, move deliberately from visibility to policy to prevention. Use entity‑aware controls so the same policies apply whether the actor is a person, a SaaS workflow, or an AI agent, and give agents a way to call into a service like Bonfy’s MCP interface to check content in‑flight rather than trusting static configuration alone. That lets you say “yes” to AI at scale, but on your own terms – with measurable controls, auditable decisions, and a defensible story when the board or regulators ask how you kept data safe in an agent‑driven world.

If you’re a CISO being told to ‘deploy AI agents now and keep all the data safe,’ don’t argue AI versus no AI – insist on the order of operations. First you turn on deep visibility into where sensitive, regulated, and customer‑specific content flows across email, SaaS, collaboration, and agents; then you layer in policies; only then do you allow large‑scale automation with agents calling a service like Bonfy in‑flight to check ‘Is this safe to send or store here?’ before they act. That’s how you say yes to AI at scale without betting the company on blind trust in someone else’s configuration.


from Help Net Security https://ift.tt/aPJAIoG

GitLab CI/CD pipelines often accumulate configuration decisions that drift from security baselines over time. Container images get pinned to mutable tags, branches lose protection settings, and required templates go missing. An open-source tool called Plumber automates the detection of those conditions by scanning pipeline configuration and repository settings directly.

GitLab CI/CD compliance scanner

What Plumber checks

Plumber reads a project’s .gitlab-ci.yml file and queries the GitLab API to produce a compliance report. The tool ships with eight controls that teams can enable, disable, or configure through a .plumber.yaml file in their repository.

The controls cover container image tags (flagging mutable references like latest, dev, main, and master), container image registries (confirming images come from sources defined as trusted in the configuration), and branch protection (checking whether critical branches enforce minimum access levels and block force pushes).

Additional controls verify that pipeline jobs come from includes or components rather than being defined directly in the CI file, that included templates and components are up to date, that version references for includes do not use mutable identifiers, and that required components or templates are present.
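To make the shape of that configuration concrete, a .plumber.yaml covering a few of the controls above might look something like the following. The key names here are illustrative guesses, not Plumber’s documented schema; consult the project’s README for the real format.

```yaml
# Hypothetical .plumber.yaml sketch -- key names are illustrative only.
controls:
  image-tags:
    enabled: true            # flag mutable tags: latest, dev, main, master
  image-registries:
    enabled: true
    trusted:
      - registry.example.com # only images from here are considered trusted
  branch-protection:
    enabled: true
    branches: [main]         # enforce access levels, block force pushes
```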

The tool connects to a GitLab instance via the API using a personal access token. The token requires read_api and read_repository scopes and must belong to a user with Maintainer-level access or higher on the project being scanned.

Two deployment paths

Plumber can run as a standalone command-line binary or as a GitLab CI component added directly to a pipeline. The CLI path suits local testing or one-off scans. The CI component path runs automatically on every pipeline execution against the default branch, tags, and open merge requests.

Adding Plumber as a CI component requires two lines in a .gitlab-ci.yml file and a GITLAB_TOKEN variable configured in the project’s CI/CD settings. A configurable threshold (defaulting to 100 percent) determines whether the job passes or fails. Teams can lower the threshold during adoption and tighten it over time.
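Under GitLab's CI/CD component syntax, those two lines would look roughly like this; the component path and version are placeholders, and the real values come from Plumber's documentation:

```yaml
include:
  - component: gitlab.com/<namespace>/plumber/plumber@<version>
```

With `GITLAB_TOKEN` set as a masked variable in the project's CI/CD settings, the component's job then runs automatically against the default branch, tags, and open merge requests.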

Output appears as a colorized terminal report and optionally as a JSON file suitable for audit records or downstream tooling.

Installation and availability

Plumber is written in Go and released under the Mozilla Public License 2.0. Binaries are available for Linux, macOS, and Windows. Installation is supported via Homebrew, Mise, direct binary download, and Docker. Building from source requires Go 1.24 or later.

Organizations running self-hosted GitLab instances can import the Plumber repository into their own infrastructure and publish it as a CI/CD catalog resource for internal use.

Plumber is available for free on GitHub.



from Help Net Security https://ift.tt/tQTi5yf

DNS infrastructure underpins nearly every network connection an organization makes, yet security configurations for it have gone largely unrevised at the federal guidance level for more than twelve years. NIST published SP 800-81r3, the Secure Domain Name System Deployment Guide, superseding a version that dates to 2013.

NIST DNS Security Guide

The document covers three main areas: using DNS as an active security control, securing the DNS protocol itself, and protecting the servers and infrastructure that run DNS services. It is directed at two groups: cybersecurity executives and decision-makers, and the operational networking and security teams who configure and maintain DNS environments.

DNS as a security enforcement point

The updated guidance places significant emphasis on protective DNS, a term the document uses to describe DNS services enhanced with security capabilities that can analyze queries and responses and take action against threats. Protective DNS can block connections to malicious domains, filter traffic by category, and generate query logs that support digital forensics and incident response.

The document describes two general deployment models: cloud-based protective DNS services and on-premises deployments using DNS firewalls or Response Policy Zones (RPZs). A hybrid approach combining both is recommended where feasible, on the basis that a cloud outage with on-premises fallback still preserves protection.

Cricket Liu, EVP Engineering, Chief DNS Architect and Senior Fellow at Infoblox, which co-authored the publication, offered Help Net Security additional detail on RPZ deployment practice. “When deploying Response Policy Zones, one of the most common protective DNS mechanisms, we advise organizations to create a local RPZ to override RPZs that they might get from other organizations,” he said. “Organizations would typically use the local RPZ to whitelist their internal namespace, and then can add individual domain names to the whitelist that might be erroneously blocked.”
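In BIND, for example, that layering works because RPZs are evaluated in the order they are listed, so a local passthrough zone placed first overrides later feeds. The zone and domain names below are illustrative:

```
# named.conf excerpt: the local zone is checked before the vendor feed
options {
    response-policy {
        zone "rpz.local";        # local overrides, evaluated first
        zone "rpz.vendor-feed";  # third-party threat intelligence feed
    };
};

; rpz.local zone data: pass the internal namespace through untouched,
; plus individual names that a feed blocks erroneously
corp.example.           CNAME rpz-passthru.
*.corp.example.         CNAME rpz-passthru.
partner-portal.example. CNAME rpz-passthru.
```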

The guidance recommends that protective DNS logs be integrated with SIEM or log analysis platforms, and that DNS query data be correlated with DHCP lease histories to map IP addresses to specific assets during incident response.

Encrypted DNS changes the role of the DNS server

The publication dedicates substantial attention to encrypted DNS, covering three protocols: DNS over TLS (DoT), which runs on TCP port 853; DNS over HTTPS (DoH), which runs on TCP and UDP port 443; and DNS over QUIC (DoQ), which runs on UDP port 853. All three encrypt communication between stub resolvers and recursive DNS servers and optionally support server authentication.

The U.S. government requires Federal Civilian Executive Branch agencies to use encrypted DNS when communicating with agency endpoints, wherever technically supported. The guidance notes that organizations may need to configure browsers and other applications that implement their own encrypted DNS so that local resolvers used for control and logging are not bypassed.

Liu pointed to a structural consequence of encrypted DNS deployment that the guidance reinforces. “The DNS server itself becomes critical when encrypted DNS is deployed,” he said. “The DNS server becomes the detection and enforcement point. Additionally, organizations can retrieve and store Passive DNS data collected from its DNS servers, which allows them to detect threats, log responses, and more.”

The publication advises organizations to block unauthorized DoT traffic using firewall rules on TCP port 853, and to use RPZs combined with firewall rules to restrict unauthorized DoH traffic, which is more difficult to block given its use of port 443. Mobile device management tools are recommended for enforcing approved DNS configurations on endpoints.
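A sketch of the port-853 restriction in nftables might look like this; the resolver address is a documentation-range placeholder:

```
# Allow DoT/DoQ only to the organization's approved resolver;
# drop port 853 everywhere else. DoH (port 443) needs RPZs plus
# firewall rules instead, since 443 cannot simply be blocked.
table inet dns_policy {
    chain forward {
        type filter hook forward priority filter; policy accept;
        ip daddr 198.51.100.53 tcp dport 853 accept   # approved DoT resolver
        ip daddr 198.51.100.53 udp dport 853 accept   # approved DoQ resolver
        tcp dport 853 drop
        udp dport 853 drop
    }
}
```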

DNSSEC guidance updated for current algorithms

The publication updates DNSSEC signing recommendations to reflect current cryptographic standards. It presents a table of supported algorithms drawn from RFC 8624 and NIST SP 800-57, covering RSA with SHA-256, ECDSA P-256 and P-384, and the Edwards-curve algorithms Ed25519 and Ed448. The document notes a preference for ECDSA and Edwards-curve algorithms over RSA, on the basis that smaller key and signature sizes help keep DNS response sizes within limits that avoid requiring TCP.
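The size argument is easy to see with back-of-the-envelope arithmetic. Signature sizes below are the standard values for each algorithm; the four-signature count is an illustrative rollover scenario, not a figure from the guide:

```python
# Rough DNSSEC response sizing behind the ECDSA/EdDSA preference
SIG_BYTES = {
    "RSA-2048 / SHA-256": 256,  # signature length equals the RSA modulus size
    "ECDSA P-256": 64,
    "Ed25519": 64,
}
EDNS_BUDGET = 1232  # widely used default UDP payload limit to avoid fragmentation

for alg, size in SIG_BYTES.items():
    rrsig_total = 4 * size  # e.g., four RRSIGs present during a key rollover
    print(f"{alg}: 4 signatures = {rrsig_total} bytes of the {EDNS_BUDGET}-byte budget")
```

Four RSA-2048 signatures alone consume over 1,000 bytes of a 1,232-byte budget before any keys or record data, while the elliptic-curve equivalents use a quarter of that.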

DNSSEC signing keys are categorized as signature keys with a recommended maximum lifetime of one to three years. The document recommends keeping RRSIG validity periods short, on the order of five to seven days, to limit the window during which a compromised key can be used to forge responses. Hardware security modules are recommended for storing private keys, particularly key-signing keys, where practical.

The guidance recommends NSEC over NSEC3 for authenticated denial of existence, noting that NSEC3’s computational overhead is generally not justified by the protection it provides against zone walking. Organizations required by policy to use NSEC3 are directed to RFC 9276 for parameter settings that reduce denial-of-service risk.

Post-quantum cryptographic algorithms are not yet specified for DNSSEC use. The document notes that administrators should plan to migrate once specifications and tooling are available.

Authoritative server hygiene and zone management

The guidance describes several threat categories specific to authoritative DNS services. Dangling CNAME records, in which a CNAME points to a domain no longer registered or controlled by the organization, can allow threat actors to take over resolution for that name. Lame delegations, where a subdomain is delegated to name servers that are no longer authoritative for it, create a similar exposure and can enable domain hijacking through a hosting provider.

The document recommends active monitoring of domain registrations to detect look-alike or typosquat domains. It also recommends that organizations maintain retired domain delegations in a parked state for a period of time to prevent re-registration by attackers.

TTL values are recommended in the range of 1,800 seconds (30 minutes) to 86,400 seconds (one day) for most DNS data. A TTL of zero is explicitly prohibited, and values below 30 seconds are discouraged for DNSSEC-signed records.
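In zone-file terms, the recommended range looks like the following sketch; names, addresses, and specific values are illustrative:

```
$TTL 14400                           ; 4-hour default, inside the 30 min - 1 day range
www.example.com.    3600  IN A  203.0.113.10   ; 1 hour for records that change often
mail.example.com.  86400  IN A  203.0.113.25   ; 1 day for stable records
```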

What the NIST DNS security guide covers on infrastructure architecture and availability

The publication repeats a recommendation from earlier versions that authoritative and recursive functions be separated on internet-accessible servers. An internet-facing name server configured for both functions is described as a security risk.

NIST recommends deploying at least two authoritative name servers on different network segments, and dispersing them geographically across different physical sites. A hidden primary authoritative server, one that does not appear in the zone’s NS record set, is recommended to reduce the primary server’s exposure to direct attack.

DNS servers should run on dedicated infrastructure, separate from other services, to reduce the attack surface and ensure adequate resources for logging, encrypted DNS, and protective DNS functions. Where full separation is not practical, combining DNS with closely related core services such as DHCP is described as an acceptable alternative.



from Help Net Security https://ift.tt/ZeFdzug


Back in November, Google made a stunning announcement: Quick Share was suddenly compatible with Apple's AirDrop. At the time, the compatibility was limited to the Pixel 10, but no matter: Google had just made history, transforming the sharing features from platform-specific to cross-platform.

While AirDrop and Quick Share have long been the most convenient ways to share large files between devices, they only worked if you and your friend were on the same OS. That limitation introduced some inconvenient friction, but as cross-platform support expands, that friction is easing. Google first announced plans for greater AirDrop compatibility in Quick Share last month, when Android Vice President of Engineering Eric Kay noted, "[i]n 2026, we're going to be expanding [AirDrop support] to a lot more devices."

While there's no official timeline on which devices will gain support and when, Nothing has said it is "exploring" adding it, while Qualcomm "can't wait" to add the feature to its Snapdragon chips. However, we do now know one Android device that will support AirDrop very soon: the Samsung Galaxy S26.

AirDrop support comes to the Galaxy S26 series

Samsung made the news official on March 22 (technically Monday, March 23 in Korea). At launch, AirDrop support will only work on the Galaxy S26 series, including the Galaxy S26, Galaxy S26 Plus, and Galaxy S26 Ultra. That's a bummer for owners of the Galaxy S25 and earlier models, but it is possible Samsung will expand support in time. After all, Google started rolling out AirDrop support for Pixel 9 devices late last month.

If you do have a Galaxy S26 device, this feature is live right now—if you live in Korea. Samsung says the feature will be rolling out to the U.S. later this week, but as of this writing, the update is only available to Galaxy users in Korea. (I'll update this piece when Samsung releases the update for those of us in the U.S.)

How to enable AirDrop support in Quick Share on Galaxy


If you have an S26, and the update has rolled out to you, you just need to head to Settings > Software update (or System updates), then hit "Download and install," "Check for system updates," or "Check for software updates," depending on your device.

Then, once your phone has the update, you'll need to manually enable AirDrop support for Quick Share—it won't just appear on your phone. To do so, head to Settings > Connected devices > Quick Share, then toggle on the new "Share with Apple devices" option.

If you tap the option itself, you'll find a full description of the feature, which you may or may not already know: The recipient needs to have their iPhone's AirDrop settings set to "Everyone," and when you want to receive a file, you need to open Quick Share on your end. Samsung says your phone may temporarily disconnect from wifi when looking for or sharing to other iPhones.


from Lifehacker https://ift.tt/XjT0GPm

Here’s an overview of some of last week’s most interesting news, articles, interviews and videos:

Week in review

What smart factories keep getting wrong about cybersecurity
In this Help Net Security interview, Packsize CSO Troy Rydman breaks down the biggest vulnerabilities in smart factory environments today, from IoT devices and legacy systems to human error. He explains how unmanaged devices, from sensors to robotic components, often go unpatched and become entry points for attackers.

Certificate lifespans are shrinking and most organizations aren’t ready
The push for shorter TLS certificate lifespans has grown for years. Google first promoted 90-day certificates, and Apple later proposed 47-day ones, prompting the CA/Browser Forum to set a formal timeline. That plan cuts validity from one year to 200 days, then 100, and finally 47, forcing organizations to rethink certificate purchasing and management.

Stop building security goals around controls
In this Help Net Security interview, Devin Rudnicki, CISO at Fitch Group, argues that security strategy fails when it loses its connection to business outcomes. Rudnicki walks through how to align security goals with corporate priorities, why CISOs must present risk in terms leadership can act on, and how to balance innovation speed with measured risk.

AI got it wrong with high confidence. Now what?
In this Help Net Security interview, Christian Debes, Head of Data Analytics & AI at SPRYFOX, talks about the growing gap between what AI models do and what their operators can explain. He argues this gap is already a liability, particularly when decisions affect people or money and no one can say why a model produced a certain output.

Field workers don’t need more access, they need better security
In this Help Net Security interview, Chris Thompson, CISO at West Shore Home, discusses least privilege and credential hygiene for a field-based workforce. He covers access management, authentication practices, and data risk processes that support employees in the field. Thompson also outlines security awareness efforts and how field teams are integrated into an organization’s security posture.

CISA warns of active exploitation of Microsoft SharePoint vulnerability (CVE-2026-20963)
CVE-2026-20963, a remote code execution (RCE) SharePoint vulnerability Microsoft fixed in January 2026, is being exploited by attackers. The confirmation comes from the US Cybersecurity and Infrastructure Security Agency (CISA), which added the flaw to its Known Exploited Vulnerabilities (KEV) catalog on Wednesday.

DarkSword: Researchers uncover another iOS exploit kit
A powerful iPhone hacking toolkit dubbed “DarkSword” has been used since November 2025 to compromise devices by exploiting zero-day iOS vulnerabilities, Google researchers have shared. Two weeks ago, Google Threat Intelligence Group (GTIG) and iVerify disclosed the existence of Coruna, a spy-grade iOS exploit kit that has been used in a commercial surveillance operation, by state-linked threat actors engaged in cyber espionage, and by cybercriminals.

Unpatched ScreenConnect servers open to attack (CVE-2026-3564)
ConnectWise has patched a critical vulnerability (CVE-2026-3564) that could enable attackers to hijack ScreenConnect sessions by abusing ASP.NET machine keys to forge trusted authentication. The ScreenConnect remote access platform is popular with managed service providers, IT departments, and technology solution providers. They can opt for the cloud-hosted version or can deploy it on their own servers or in their private cloud.

Cisco FMC flaw was exploited by Interlock weeks before patch (CVE-2026-20131)
A critical vulnerability (CVE-2026-20131) in Cisco Secure Firewall Management Center (FMC) that Cisco disclosed and patched in early March 2026 has been exploited as a zero-day by the Interlock ransomware gang, Amazon CISO and VP of Security Engineering CJ Moses revealed.

What to do in the first 24 hours of a breach
In this Help Net Security video, Arvind Parthasarathi, CEO of CYGNVS, walks through a 10-step process for handling a cybersecurity breach. The first five steps cover preparation, while the next five address what to do once a breach is underway.

Cloud misconfiguration has evolved and your controls haven’t
In this Help Net Security video, Kat Traxler, Principal Security Researcher – Public Cloud at Vectra AI, walks through two AWS misconfigurations that go beyond the basics of bucket visibility. The first is bucket name squatting, and the second is the cross-service confused deputy problem.

Fake scandal clips on Facebook bait victims into investment scams
Bitdefender researchers uncovered hundreds of scam campaigns promoted through Facebook ads that use fake news stories, celebrity impersonation, and redirect chains to funnel victims into investment fraud schemes. The activity ran through 310 malvertising campaigns distributed on Meta platforms from February 9 to March 5, 2026. The campaigns generated more than 26,000 ad sightings with localized content in more than 15 languages.

45,000 malicious IP addresses taken down, 94 suspects arrested
An international law enforcement operation has taken down more than 45,000 malicious IP addresses and servers linked to phishing, malware, and ransomware activity. The action was carried out as part of Operation Synergia III, an investigation that ran from July 18, 2025 to January 31, 2026.

Hackers tried to breach Poland’s nuclear research centre
Poland’s National Centre for Nuclear Research (NCBJ) thwarted a cyberattack targeting its IT infrastructure. The attempted intrusion was detected and blocked before attackers could compromise systems or disrupt operations.

Meta ditches end-to-end encrypted messaging on Instagram
End-to-end encrypted messaging on Instagram will no longer be supported after May 8, 2026. Meta justified the move by saying the feature was rarely used, with only a small fraction of Instagram users enabling encryption. The company advised users seeking end-to-end encryption to switch to WhatsApp, where it is enabled by default.

Hidden instructions in README files can make AI agents leak data
Developers rely on AI coding agents to set up projects, install dependencies, and run commands by following instructions in repository README files, which provide setup guidance for software projects. New research identifies a security risk when attackers hide malicious instructions in those documents.

Millions of UK firms on alert after Companies House data exposure
Companies House, the UK’s official company registry, said its WebFiling service is back online after being shut down on Friday to fix a security issue that may have exposed the personal data of millions of firms. An investigation indicates the flaw was likely introduced during an October 2025 update.

EU sanctions Chinese company behind 65,000-device hack
The EU Council has sanctioned companies from China and Iran, along with two individuals, over cyberattacks targeting its member states and partners. With the latest listings, the EU cyber sanctions regime applies to 19 individuals and 7 entities.

Global fraud losses climb to $442 billion
Online fraud is reaching more victims and generating larger losses, driven by digital tools and organized networks operating across borders. In INTERPOL’s March 2026 Global Financial Fraud Threat Assessment, financial fraud sits among the top five global crime threats, with a 54% rise in fraud-related Notices and Diffusions from 2024 to 2025.

Big tech companies step in to support the open source security ecosystem
Backed by new funding commitments from major technology players, open source security efforts are moving beyond threat identification toward practical solutions for defenders. The Linux Foundation announced $12.5 million in grant funding backed by Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI to strengthen open source security.

Apple starts issuing lightweight security updates between software releases
Apple is delivering small security updates, called Background Security Improvements, starting with iOS 26.1, iPadOS 26.1, and macOS 26.1. Apple describes Background Security Improvements as lightweight security releases for components such as Safari, the WebKit framework, and other system libraries, delivered through ongoing patches between software updates.

Firefox is getting a free built-in VPN
Privacy concerns often follow free VPN services, especially when unclear data practices put user information at risk. Mozilla says its version is grounded in its data principles and focus on trust, aiming to avoid the kinds of arrangements that have raised questions in the past.

Elite members of North Korean society fake their way into Western paychecks
Increased federal activity, including indictments over the past year, has drawn attention to a pattern that has been unfolding inside corporate hiring pipelines. North Korean nationals are securing roles as remote IT contractors and full-time staff within organizations across North America and Western Europe, using standard hiring channels to get in.

Samba 4.24.0 ships Kerberos hardening and a CVE fix for domain encryption defaults
Samba 4.24.0 arrived carrying a set of Kerberos security changes aimed at Active Directory deployments. The release fixes a vulnerability, extends audit coverage for sensitive AD attributes, and introduces configuration options to counter two related Kerberos impersonation techniques.

900,000 contact records exposed in Aura data breach
Aura, the online safety service, confirmed that an unauthorized party accessed about 900,000 records, mostly names and email addresses from a marketing tool linked to a company it acquired in 2021. The incident occurred as a result of a targeted phone phishing attack that tricked one of the employees.

Secure endpoint management systems immediately, CISA urges
The US Cybersecurity and Infrastructure Security Agency (CISA) warns that the cyberattack on Stryker Corporation serves as a signal to U.S. organizations that foreign cyber activity tied to Middle East conflicts may be spilling into their operations. Attackers breached Stryker’s internal Microsoft environment and reportedly wiped 200,000 systems, servers, and mobile devices, while extracting 50 terabytes of data.

4chan shrugs off UK regulator, refuses to pay £520,000 in fines over online safety violations
The U.K.’s media regulator Ofcom fined 4chan £450,000 under the Online Safety Act for failing to introduce age checks to stop children from accessing pornographic content on its platform. 4chan is an online forum notorious for its extreme right-wing content, gory videos, and non-consensual pornography.

Authorities disrupt four IoT botnets behind record DDoS attacks
The U.S. Justice Department and international partners have disrupted four IoT botnets linked to DDoS attacks that reached 30 terabits per second, among the largest ever recorded. The four botnets targeted in the operation—Aisuru, KimWolf, JackSkid and Mossad—infected millions of devices worldwide, primarily IoT systems such as digital video recorders, web cameras and WiFi routers.

Terminated contract led to $2.5 million cyber extortion scheme
A federal jury convicted Cameron Curry, 27, a Charlotte resident, of carrying out an extensive cyber extortion scheme targeting a Washington, D.C.-based international technology company. He faces up to two years in prison on each of the six charges.

VulHunt: Open-source vulnerability detection framework
Binarly has published VulHunt Community Edition, making the core scanning engine from Binarly’s commercial Transparency Platform available to independent researchers and practitioners. VulHunt Community Edition is a framework for detecting vulnerabilities in compiled software. It operates against multiple binary representations simultaneously, working across disassembly, an intermediate representation layer, and decompiled code. Targets include POSIX executables and UEFI firmware modules.

Microsoft Edge 146 adds IP privacy and local network access controls
Microsoft Edge version 146 (Stable) became available on March 13, 2026, bringing updates to tracking protection, IP privacy, and enterprise network security policies.

Microsoft zeroes in on AI-driven data risks in Fabric
New Microsoft Purview innovations for Microsoft Fabric help organizations secure data and accelerate AI adoption. The updates focus on identifying risks, preventing data oversharing, and strengthening governance and data quality across the data estate.

Your APIs are under siege, and attackers are just getting warmed up
Internet-facing systems are handling sustained levels of malicious traffic across APIs, web applications, and DDoS channels. Akamai’s State of the Internet security report places these patterns within the same operating environment, with activity increasing across each area through 2025.

Betterleaks: Open-source secrets scanner
Secrets scanning has become standard practice across engineering organizations, and Gitleaks has been one of the most widely used tools in that space. The author of that project has now released a new tool called Betterleaks, which is designed to scan git repositories, directories, and standard input for leaked credentials, API keys, tokens, and passwords.

Java 26 ships with new cryptography API and HTTP/3 support
Oracle released JDK 26, the 17th consecutive feature release delivered under the six-month cadence the project adopted in 2018. The release includes ten JDK Enhancement Proposals spanning language changes, garbage collection improvements, cryptographic tooling, and network protocol support.

EDR killers are now standard equipment in ransomware attacks
Ransomware attackers routinely deploy tools designed to disable endpoint detection and response software before launching encryptors. These tools, known as EDR killers, have become a standard component of ransomware intrusions. ESET Research tracked nearly 90 EDR killers actively used in the wild.

Google limits Android accessibility API to curb malware abuse
Google is restricting how Android apps can use accessibility features after years of abuse by banking Trojans and mobile malware. The changes, introduced in Android 17.2, limit access to the accessibility API when Advanced Protection Mode (APM) is enabled. Apps that do not serve a core accessibility function can no longer use these services, closing off a common attack vector.

Llamafile, Mozilla’s portable LLM runner, gets GPU support and a rebuilt core
Running a large language model on a single machine without cloud access or a container runtime remains a priority for practitioners working in air-gapped or resource-constrained environments. Llamafile, Mozilla-AI’s project for packaging and running LLMs as self-contained executables, has received its most significant architectural overhaul to date with version 0.10.0.

Fake AI songs streamed billions of times, netting fraudster $10 million
Michael Smith, 54, of Cornelius, North Carolina, has pleaded guilty in federal court to running a scheme that exploited music streaming platforms and diverted royalty payments from artists. He admitted to one count of conspiracy to commit wire fraud, which carries a maximum sentence of five years in prison, and agreed to forfeit $8,091,843.64.

Google slows Android sideloading to trip up scammers
Google’s advanced flow for Android changes how apps from unverified developers are installed, adding steps to reduce scam-driven sideloading. The feature is aimed at experienced users and allows sideloading through a controlled, one-time setup. It addresses scam scenarios where attackers pressure individuals to install malicious software.

Cybersecurity jobs available right now: March 17, 2026
We’ve scoured the market to bring you a selection of roles that span various skill levels within the cybersecurity field. Check out this weekly selection of cybersecurity jobs available right now.

New infosec products of the week: March 20, 2026
Here’s a look at the most interesting products from the past week, featuring releases from Intel 471, Kore.ai, NinjaOne, Pindrop, Secure Code Warrior, Token Security, and Xona Systems.


from Help Net Security https://ift.tt/AF4ZzrU

While Google has plans to severely restrict Android users' ability to download apps from sources other than the Google Play Store, the company is introducing a new process that will allow sideloading after a mandatory 24-hour waiting period. This new "advanced flow" setting is meant to prevent users from installing malware distributed by bad actors through unverified sources, while still allowing them to sideload from legitimate developers.

Sideloading restrictions are coming to Android

Last year, Google announced that sideloading on Android would eventually be limited to verified third-party app stores and developers. This change has a clear goal: cracking down on malicious apps impersonating real ones found on the Google Play Store. These restrictions—which go into effect for Brazil, Indonesia, Singapore, and Thailand later this year, and apply globally in 2027—will eventually require developers to register specific details with Google in order to distribute their apps, as well as pay a fee. (Students and hobbyists will still be able to share apps with up to 20 devices without registering or requiring users to go through the new workaround.)

This move was met with significant criticism from both developers and users, with concerns ranging from privacy infringement (developers now need to share details they didn't previously have to) to increased difficulty accessing modified or downgraded versions of apps. As such, Google is rolling out a compromise it feels will protect most users from malware, while allowing power users to sideload when they wish to.

Google is introducing a sideloading workaround

The new advanced flow setting will add multiple points of friction to unverified app installation, cutting into the sense of urgency scammers frequently use to distribute malware. Users will go through a one-time process to disable security protections—meaning you won't need to repeat it every time you want to sideload—but you'll still see a warning when you attempt to install an app from an unverified developer.

If you're interested in this workaround, you will first need to enable developer mode in your device's Settings app and confirm you are not being coerced into disabling security protections on your device (a common scam tactic). Next, you'll need to restart your phone, which shuts down calls and remote access tools scammers may use to communicate with you or control your device. From here, you'll have to wait 24 hours before you can return and authenticate the settings change using biometrics or your device PIN. Finally, you'll confirm you understand the risks, which then allows you to install apps from unverified developers for seven days, or indefinitely.

This workaround will be available starting in August—before developer registration requirements kick in.


from Lifehacker https://ift.tt/oAbrM3N