Adoption of managed infrastructure services is increasing and new cloud watering hole attacks are emerging, Accurics reveals.
Of all violations identified, 23 percent correspond to poorly configured managed service offerings – largely the result of default security profiles or configurations that offer excessive permissions.
Cloud environments most vulnerable to watering hole attacks
As demonstrated by a recent high-profile hack, attackers increasingly seek to exploit weaknesses that let them deliver malware to end users, gain unauthorized access to production environments or their data, or completely compromise a target environment. This strategy is known as a watering hole attack, and researchers have seen such attacks emerge in cloud environments, where they can cause even more damage.
This is partly because development processes in the cloud that leverage managed services are not hidden inside the organization as they are in on-premises environments – in fact, they are largely exposed to the world.
When criminals exploit misconfigurations in development pipelines, it can spell disaster not only for the company but also for its customers. To address this risk, enterprises should assume the entire development process is easily accessible and restrict access to only the users who need it.
“Cloud native apps and services are more vital than ever before, and any risk in the infrastructure has critical implications,” said Accurics CTO & CISO Om Moolchandani.
“Our research indicates that teams are rapidly adopting managed services, which certainly increase productivity and maintain development velocity. However, these teams unfortunately aren’t keeping up with the associated risks – we see a reliance on using default security profiles and configurations, along with excessive permissions.
“Messaging services and FaaS are also entering a perilous phase of adoption, just as storage buckets experienced a few years ago. If history is any guide, we’ll start seeing more breaches through insecure configurations around these services.”
MTTR for violations is 25 days across all environments
The research reveals that the mean time to remediate (MTTR) violations is 25 days across all environments – a luxury for potential attackers. In this report, MTTR is particularly important as it pertains to drift: configuration changes made in runtime that cause the cloud risk posture to drift from established secure baselines. For drifts from established secure infrastructure postures, the MTTR is 8 days overall.
Even organizations that establish a secure baseline when infrastructure is provisioned will experience drift over time, as happened in another well-publicized breach. In that case, the AWS S3 bucket was configured correctly when it was added to the environment in 2015, but a configuration change made five months later to fix a problem was never reverted once the work was complete. The drift went undetected and unaddressed until it was exploited nearly five years later.
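The report doesn't describe the tooling behind these findings, but the kind of drift in the S3 example can be caught with a simple periodic check. Below is a minimal sketch, assuming Python with boto3 and a hypothetical bucket name, that compares a bucket's current public-access settings against a secure baseline and reports any deviation.

```python
# Minimal drift-check sketch (illustrative, not from the report): compare an S3
# bucket's public-access settings against a secure baseline. Bucket name is hypothetical.
import boto3
from botocore.exceptions import ClientError

BASELINE = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

def check_bucket_drift(bucket_name: str) -> list[str]:
    """Return the list of settings that have drifted from the baseline."""
    s3 = boto3.client("s3")
    try:
        current = s3.get_public_access_block(Bucket=bucket_name)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            # No public-access block configured at all: every baseline setting is effectively off.
            return list(BASELINE)
        raise
    return [key for key, expected in BASELINE.items() if current.get(key) != expected]

if __name__ == "__main__":
    drifted = check_bucket_drift("example-customer-data-bucket")  # hypothetical name
    if drifted:
        print("Drift detected:", ", ".join(drifted))
    else:
        print("Bucket matches secure baseline.")
```

A check like this could run on a schedule or be triggered by configuration-change events; the baseline values shown are simply the strictest settings, not anything prescribed by the report.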
Cloud infrastructure risks
- Kubernetes users who try to implement role-based access control (RBAC) often fail to define roles at the proper granularity. This increases credential reuse and the chance of misuse – in fact, 35% of the organizations evaluated struggle with this problem (see the audit sketch after this list).
- In Helm charts, 48% of the problems stemmed from insecure defaults. The most common mistake was improper use of the default namespace; placing workloads there alongside other resources could give attackers access to those components or their secrets.
- Identity and access management (IAM) defined through infrastructure as code (IaC) in production environments was observed for the first time, and more than a third (35%) of the IAM drifts detected in this report originate in IaC. This indicates rapid adoption of IAM as code, which raises the risk of misconfigured roles.
- Hardcoded secrets represent almost 10% of the violations identified; 23% correspond to poorly configured managed service offerings.
- Of the organizations tested, 10% actually pay for advanced security capabilities that are never enabled.
- While the average time to fix infrastructure misconfigurations was about 25 days, the most critical portions of the infrastructure often take the most time to fix – for example, load-balancing services take an average of 149 days to remedy. Since all user-facing data flows through these resources, they should ideally be fixed the fastest, not the slowest.
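To make the RBAC granularity point above more concrete, here is a rough audit sketch (not from the report) that assumes the official kubernetes Python client and credentials available in the local kubeconfig. It flags ClusterRoles whose rules use wildcard verbs or resources, a common symptom of roles defined too coarsely.

```python
# Rough RBAC audit sketch (illustrative only): flag ClusterRoles whose rules
# grant wildcard verbs or resources, a common sign of overly broad roles.
from kubernetes import client, config

def find_wildcard_cluster_roles() -> list[str]:
    """Return names of ClusterRoles containing '*' in verbs or resources."""
    config.load_kube_config()  # assumes credentials in the local kubeconfig
    rbac = client.RbacAuthorizationV1Api()
    flagged = []
    for role in rbac.list_cluster_role().items:
        for rule in role.rules or []:
            if "*" in (rule.verbs or []) or "*" in (rule.resources or []):
                flagged.append(role.metadata.name)
                break
    return flagged

if __name__ == "__main__":
    for name in find_wildcard_cluster_roles():
        print(f"Overly broad ClusterRole: {name}")
```

Built-in roles such as cluster-admin will also show up in the output; the interesting findings are custom roles that are this broad, since those encourage the credential reuse the report describes.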
Protecting cloud infrastructure requires a fundamentally new approach that embeds security earlier in the development lifecycle and maintains a secure posture throughout. Cloud infrastructure must be continuously monitored at runtime for configuration changes and assessed for risk.
Where a configuration change introduces risk, the cloud infrastructure must be redeployed from the secure baseline; this ensures that any risky changes made accidentally or maliciously are automatically overwritten.
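The report doesn't name specific tooling for this redeployment step, but continuing the earlier S3 example, one simple form of it is re-applying the provisioned baseline whenever drift is detected (or on a schedule, since the call overwrites whatever is there). A minimal, hypothetical sketch:

```python
# Minimal remediation sketch (illustrative, not from the report): re-apply the
# secure public-access baseline to an S3 bucket so that any accidental or
# malicious change is overwritten. Bucket name is hypothetical.
import boto3

BASELINE = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

def enforce_baseline(bucket_name: str) -> None:
    """Overwrite the bucket's public-access settings with the secure baseline."""
    boto3.client("s3").put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=BASELINE,
    )

if __name__ == "__main__":
    enforce_baseline("example-customer-data-bucket")  # hypothetical bucket
```

In a real pipeline this enforcement would more likely be handled by re-running the IaC tool that provisioned the resource, so the baseline stays defined in one place.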
With new attacks emerging and ongoing risks continuing to plague organizations, cloud cyber resilience is now more important than ever, and configuration hygiene is critical.
from Help Net Security https://ift.tt/3dDYml9