Known unknowns: Refining your approach to uncategorized web traffic


Cybersecurity is such a complex field that even the best-trained, best-equipped, and most experienced security managers will sometimes struggle to decide which of several paths to take.


Let’s consider uncategorized web traffic, for instance. I define this broadly as traffic to sites that aren’t yet classified, sites that can’t be classified (because they’re newly created, parked, or newly reactivated domains), or traffic that is, for now, unresolvable via standard domain name lookup.

Since users can and will travel the web as they see fit, they will inevitably browse to uncategorized sites. Security managers therefore must create security policies to handle and secure that traffic. And that’s not an easy thing to do, because of the awkward question it introduces: how can you secure a class of traffic you don’t yet know anything about and can’t define?

Steering toward a balance

You can simply block, by default, all access to all uncategorized sites. But wielding such a blunt policy instrument is likely to introduce a range of problems for users trying to access legitimate sites, and to slow business velocity. At the system level, exception handling may become so common that it’s no longer an exception but the norm, creating performance issues and exception-handling fatigue.

On the other hand, if you ignore uncategorized traffic, you introduce a range of potentially serious issues and risks.

Ungoverned or outright malicious sites may be loaded with threats; users who visit them may inadvertently acquire and spread malware. As bad, or worse, is the potential that users will be fooled into ponying up their credentials, which the sites will harvest for sale to other attackers or for immediate exploitation.

Newly registered domains (NRDs) are a good example of how this situation can play out; they’re favored by malicious actors for exactly this reason. Slight variations of legitimate, trusted domains (variations users don’t notice) can create false confidence in users’ minds, opening the door to phishing, command-and-control exploits, data exfiltration, and ransomware.

This means that, for security managers, the challenge of uncategorized traffic is to find a suitable middle course: minimizing security risks and exception handling while maximizing the user experience.

That’s easier said than done, but I do have some recommendations. And since the specifics of implementation will obviously vary across tools and services, I’ll try to present them generally.

Best practices for handling uncategorized traffic

All uncategorized traffic should be subjected to TLS inspection. Given that over 90% of internet traffic is encrypted today, inspecting uncategorized traffic is essential for gaining visibility into potentially malicious payloads or data exfiltration.
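To make that concrete, here’s a rough policy sketch in Python, not tied to any particular SWG product; the category labels and the lookup_category() helper are placeholders of my own:

```python
# Sketch: force TLS inspection for any destination without a trusted category.
KNOWN_CATEGORIES = {"intranet.example.com": "business"}  # stand-in for a real category feed
UNINSPECTED_OK = {"business", "finance", "healthcare"}   # illustrative decryption-bypass list

def lookup_category(hostname: str) -> str:
    """Placeholder for the gateway's URL-categorization lookup."""
    return KNOWN_CATEGORIES.get(hostname, "uncategorized")

def should_decrypt(hostname: str) -> bool:
    """Decrypt (TLS-inspect) everything that isn't in an explicitly trusted category."""
    return lookup_category(hostname) not in UNINSPECTED_OK

for host in ("intranet.example.com", "newly-minted-site.top"):
    print(host, "->", "TLS-inspect" if should_decrypt(host) else "bypass")
```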

Managers can augment that visibility by enabling suspicious-new-domain lookups to identify newly registered, newly observed, and newly reactivated domains. This allows more control over algorithmically generated domains, domains inspired by current events, typosquatting domains, domains revived after a bankruptcy, and the many similar scenarios used for malware dissemination.
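If your tooling doesn’t expose domain age directly, a rough approximation is a WHOIS lookup on the registration date. Here’s a minimal sketch that assumes the third-party python-whois package; a production SWG would use its own domain-age intelligence feed instead:

```python
# Sketch: flag newly registered domains (NRDs) by WHOIS creation date.
# Assumes the third-party "python-whois" package (pip install python-whois).
from datetime import datetime, timedelta
import whois

NRD_WINDOW = timedelta(days=30)  # illustrative threshold

def is_newly_registered(domain: str) -> bool:
    created = whois.whois(domain).creation_date
    if isinstance(created, list):       # some registrars return multiple dates
        created = min(created)
    if created is None:                 # no data at all -> treat as suspicious
        return True
    if created.tzinfo is not None:      # normalize to naive UTC for comparison
        created = created.replace(tzinfo=None)
    return datetime.utcnow() - created < NRD_WINDOW

# Anything that comes back True is a candidate for blocking or isolation.
```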

Leverage dynamic content categorization to shrink the pool of uncategorized traffic. Secure web gateway (SWG) vendors generally OEM their content categorization from third-party providers, but given their cloud scale they can also apply their own data lakes and content-inspection technology to dynamically categorize traffic into its proper category. Doing so also helps protect an enterprise from legal liability risks.
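In practice this usually means asking the cloud service for a fresh verdict and caching it locally. The sketch below is purely illustrative; the endpoint URL and JSON fields are hypothetical, not any vendor’s real API:

```python
# Hypothetical sketch: request a dynamic category verdict and cache it.
import json
import urllib.request

CATEGORY_CACHE: dict[str, str] = {}

def dynamic_categorize(hostname: str) -> str:
    if hostname in CATEGORY_CACHE:
        return CATEGORY_CACHE[hostname]
    # Hypothetical categorization endpoint; substitute your vendor's API.
    url = f"https://categorizer.example.net/v1/lookup?host={hostname}"
    with urllib.request.urlopen(url, timeout=3) as resp:
        verdict = json.load(resp).get("category", "uncategorized")
    CATEGORY_CACHE[hostname] = verdict
    return verdict
```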

A logical next step would be to limit traffic to such sites via conservative policies. For instance, if an uncategorized site presents an untrusted server certificate, managers can block all traffic to it. If uncategorized traffic can’t be inspected for any reason, that traffic should be blocked, too. Managers can also require minimum Transport Layer Security (TLS) versions for both clients and servers and refuse any transaction that falls shy of those requirements.
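Using only the Python standard library, the “minimum TLS version, no untrusted certificates” posture looks roughly like this on the client side; a gateway would enforce the same checks inline:

```python
# Sketch: refuse connections below TLS 1.2 and reject untrusted server certificates.
import socket
import ssl

def connect_strict(hostname: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()             # verifies the cert chain and hostname
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version()                   # e.g. "TLSv1.3"

# An ssl.SSLCertVerificationError or ssl.SSLError here maps to a "block" verdict.
```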

Services that provide not only threat analysis but also smart sandboxing give managers even more granular power. File downloads from uncategorized sites should be subjected to more rigorous examination. Quarantining both malicious downloads and suspicious downloads that may not yet carry a first-stage weaponized payload is unlikely to impact business velocity through content exception handling, but it provides further risk reduction.
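A quarantine decision for downloads from uncategorized sites could be sketched like this; the Verdict values and the sandbox_submit() helper are stand-ins for whatever sandbox service you actually run:

```python
# Hypothetical sketch: quarantine downloads from uncategorized sites pending a sandbox verdict.
from enum import Enum

class Verdict(Enum):
    CLEAN = "clean"
    SUSPICIOUS = "suspicious"
    MALICIOUS = "malicious"

def sandbox_submit(file_bytes: bytes) -> Verdict:
    """Placeholder for a real sandbox/detonation service."""
    return Verdict.SUSPICIOUS

def handle_download(file_bytes: bytes, site_category: str) -> str:
    if site_category != "uncategorized":
        return "deliver"
    verdict = sandbox_submit(file_bytes)
    # Quarantine both malicious and merely suspicious files from unknown sites.
    return "deliver" if verdict is Verdict.CLEAN else "quarantine"
```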

Cautionary warnings are often helpful in cases of uncategorized traffic. Asking users to respond to a CAPTCHA-like challenge before completing a particular web transaction heightens their vigilance and forces them to think critically about the legitimacy of that transaction (and can even block malicious code that would otherwise call out to a botnet channel for further instructions).
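The gating logic itself is simple; the sketch below assumes per-user session state kept by the proxy (the names are illustrative), with the challenge page itself left to your web front end:

```python
# Sketch: interstitial challenge for uncategorized destinations.
def gate_request(hostname: str, category: str, session: dict) -> str:
    if category != "uncategorized":
        return "forward"
    if hostname in session.get("acknowledged", set()):
        return "forward"               # user already passed the challenge for this host
    return "serve_interstitial"        # a human must click through / solve the challenge

def record_acknowledgement(hostname: str, session: dict) -> None:
    session.setdefault("acknowledged", set()).add(hostname)

session: dict = {}
print(gate_request("unknown-site.example", "uncategorized", session))  # serve_interstitial
record_acknowledgement("unknown-site.example", session)
print(gate_request("unknown-site.example", "uncategorized", session))  # forward
```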

File-type security controls pay off in three areas: endpoint protection (mainly by restricting downloads of executables and binaries), data loss prevention (by restricting uploads to sites), and boosting end-user vigilance (a precious resource).

In terms of data loss prevention, security managers should consider blocking any upload of encrypted files (other than those known and required for business purposes) and blocking any upload that exceeds a certain file size. These precautions are doubly important for uncategorized sites.
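A minimal upload-control sketch, assuming a gateway hook that can see the outbound file: it blocks uploads over a size ceiling and ZIP archives with encrypted members (one common case of an encrypted file that can be detected without keys). The 10 MB limit is arbitrary:

```python
# Sketch: DLP-style upload checks for uncategorized destinations.
import io
import zipfile

MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # example ceiling: 10 MB

def zip_has_encrypted_members(data: bytes) -> bool:
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            # Bit 0 of flag_bits is the "encrypted" flag in the ZIP specification.
            return any(info.flag_bits & 0x1 for info in zf.infolist())
    except zipfile.BadZipFile:
        return False

def allow_upload(data: bytes, filename: str) -> bool:
    if len(data) > MAX_UPLOAD_BYTES:
        return False
    if filename.lower().endswith(".zip") and zip_has_encrypted_members(data):
        return False
    return True
```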

Domain Name System (DNS) analysis and control is also important. You will rarely see a business need for a mission-critical application hosted at a newly registered or revived domain. These domains are often part of attack chains, serving as termination points for DNS tunnel exfiltration or hosting drive-by malware. I would recommend blocking DNS requests and responses involving uncategorized domains.
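At the resolver layer the same idea is just a policy check in front of the lookup. This sketch assumes the third-party dnspython package, and the category labels are my own placeholders:

```python
# Sketch: DNS-layer policy; answer only queries for domains with an acceptable category.
# Assumes the third-party "dnspython" package (pip install dnspython).
import dns.resolver

DNS_DENY_CATEGORIES = {"uncategorized", "newly-registered", "newly-reactivated", "parked"}

def resolve_with_policy(domain: str, category: str) -> list[str]:
    if category in DNS_DENY_CATEGORIES:
        return []                                    # block: return no answer (or sinkhole)
    answers = dns.resolver.resolve(domain, "A")
    return [rdata.to_text() for rdata in answers]
```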

Remote browser isolation (RBI) can also reduce risk. When a user tries to access a potentially problematic website, RBI serves that user a digital rendering of that site. The pixels representing the site are then streamed to the user — not the site’s HTML/JavaScript/CSS files. Isolation can protect against compromise and exfiltration and should allow granular configuration of various isolation profiles that align with business usage models and risk.
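Isolation profiles are ultimately just policy data. A hypothetical representation, with field names of my own rather than any vendor’s schema, might look like this:

```python
# Hypothetical sketch: isolation profiles keyed by traffic category.
from dataclasses import dataclass

@dataclass
class IsolationProfile:
    render_mode: str        # "pixel-stream" or "dom-reconstruction"
    allow_downloads: bool
    allow_clipboard: bool
    allow_form_input: bool  # blocking form input also blunts credential harvesting

PROFILES = {
    "uncategorized":     IsolationProfile("pixel-stream", False, False, False),
    "newly-registered":  IsolationProfile("pixel-stream", False, False, False),
    "business-critical": IsolationProfile("dom-reconstruction", True, True, True),
}

def profile_for(category: str) -> IsolationProfile:
    return PROFILES.get(category, PROFILES["uncategorized"])
```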

Finally, I’d remind readers to enable smart isolation/predictive control if it’s available. Once trained on a large and suitable data set, machine insight like this works automatically and continuously. And over time, as it analyzes more and more data, it becomes increasingly accurate. AI/ML is particularly good at helping your team avoid credential-harvesting campaigns, because it’s much faster and better at recognizing skillfully constructed fake sites hosted on a lookalike domain than human beings tend to be.
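Full ML pipelines aside, even a crude edit-distance screen shows why machines spot lookalike domains faster than people do. This sketch uses only the standard library and a hypothetical brand list; a real service would apply trained models over many more signals:

```python
# Sketch: flag lookalike (typosquatting) domains by similarity to protected brands.
# difflib is a coarse stand-in for the trained models a real service would use.
from difflib import SequenceMatcher

PROTECTED_BRANDS = ["examplebank", "examplecorp"]  # hypothetical brand list

def looks_like_brand(domain: str, threshold: float = 0.8) -> bool:
    label = domain.split(".")[0].lower()
    if label in PROTECTED_BRANDS:
        return False                                # exact match is the real thing
    return any(SequenceMatcher(None, label, brand).ratio() >= threshold
               for brand in PROTECTED_BRANDS)

print(looks_like_brand("examp1ebank.com"))  # True: likely credential-harvesting lookalike
```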

There are many and varied approaches to uncategorized web traffic. As security leaders, we should always strive to minimize risk while enabling users to the greatest extent possible. Blunt-force, allow/deny dichotomies are problematic and outdated. Hopefully, the above serve as actionable steps for achieving more granular policy enforcement.


from Help Net Security https://ift.tt/rpfAyiH
