Monday, May 31, 2021

How do I select a data analytics solution for my business?

In today’s data-driven world, an organization should have full insight into its data, not only to maintain control over it but also to drive business and strategy decisions based on trends and patterns. Data analytics is an in-depth way of knowing your data and making the most of it, while protecting your assets.

To select a suitable data analytics solution for your business, you need to think about a variety of factors. We’ve talked to several industry professionals to get their insight on the topic.

Joe Hellerstein, CSO, Trifacta

Successful data analytics incorporates everyone at the organization, including non-technical users across departments. Getting data and analytics right requires a comprehensive approach that connects data experts and domain experts, hand-coders and no-coders, to blend engineering discipline with business agility.

The first thing to look for in a data analytics product is that it empowers everyone. Modern cloud products, with zero setup and easy to use interfaces, often do this well. There are no “walls” in the cloud, which eliminates any artificial barriers between teams. And cloud solutions give organizations the flexibility to scale up and scale down depending on their needs.

Another crucial thing to seek is the ability to see the data. When users can visualize the data at every step – as they refine, transform and analyze it – the quality and speed of the results increase dramatically.
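The value of seeing the data at every step can be sketched in a few lines. This is a hypothetical, tool-agnostic illustration: the records and the `inspect` helper are invented for this example, not any vendor's API.

```python
# A minimal sketch of "seeing the data at every step": profile the result
# after each transformation instead of only at the end. Data is hypothetical.

def inspect(step_name, rows):
    """Print a small profile of the data after a pipeline step."""
    print(f"{step_name}: {len(rows)} rows, sample: {rows[:2]}")
    return rows

raw = [
    {"customer": "a", "amount": "100"},
    {"customer": "b", "amount": None},
    {"customer": "c", "amount": "250"},
]

# Step 1: drop incomplete records, checking the effect immediately.
cleaned = inspect("cleaned", [r for r in raw if r["amount"] is not None])

# Step 2: convert types, again verifying the result before moving on.
typed = inspect("typed", [{**r, "amount": int(r["amount"])} for r in cleaned])

total = sum(r["amount"] for r in typed)
print(f"total: {total}")  # total: 350
```

Catching the dropped record at step 1, rather than after the final aggregate looks wrong, is the point of step-by-step visibility.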

The most advanced products combine visual interaction with AI/ML intelligence to recommend quality rules and suggest transformations that guide users to the best outcomes. In these products, AI meets human intelligence in a “guide and decide” approach so every user can maximize the value from data, regardless of how technical they are.

Finally, look for products that empower extended teams by supporting code and no-code approaches, with robust sharing and collaboration capabilities to leverage and extend existing work.

Alan Jacobson, Chief Data and Analytics Officer, Alteryx

Digital transformation is enabled with world-class data and analytic software that can accelerate your knowledge workers’ journeys to become data-driven. The three top buying considerations should always include:

Usability: Can all of my knowledge workers actually use the technology? In the end, data and analytic solutions perform math. They will get the same answers to your questions; however, most solutions can’t be used by your accountant, your HR professional, or your marketing operations expert and instead can only be leveraged by data scientists. This simply won’t get the best outcome, as you do not have enough data scientists to answer all your business questions.

Breadth: Does the solution span the full continuum your business needs? Can it help clean and manipulate data from any source? Can you perform geospatial analysis? Can you easily automate a process? An end-to-end platform that solves for all of your data and analytic needs can replace any disconnected, clunky point tools that only perform specific functions.

Outcomes: Are you able to quickly benchmark and see the outcomes the technology is delivering today? Are you able to trial the product for 30 days and get outcomes yourself? With an outcome-first approach, you can spend more time focusing on business impact and less time upfront completing tedious back-end work.

Vishal Kasera, Senior Director of Product, ThoughtSpot

When selecting a data analytics solution for your business, it’s important to look at the use case you’re trying to solve for. Are you trying to grant C-suite individuals access to macro-level analytics, or are you interested in putting actionable insights in the hands of the everyday business person for them to directly react to?

I’d argue enterprises benefit most when all users are empowered to access, interact and derive insights from data themselves. This leads to increased productivity and efficiency, and inspires rapid, data-driven, decision-making which drives organizations’ bottom lines.

Business and data leaders need to prioritize a solution that offers a familiar consumer-grade experience which is simple and intuitive for all users. Asking them to learn a tool that requires multiple days of training means that the analytics solution is dead in the water. When it’s as simple as search, however, people instinctively start unlocking value.

It’s even more powerful when users can then use those insights to automatically trigger action in other apps or services. All of this has to happen with large volumes of data without sacrificing the controls, security, and governance requirements associated with managing the variety and volume of data today.

Monzy Merza, VP Security GTM, Databricks

Data-driven companies are 19 times more likely to achieve above-average profitability. Data analytics solutions drive decisions that create competitive advantages. Choosing the right analytics platform isn’t just an IT decision, it’s a strategic imperative that will impact the daily work across the organization. Making a sub-optimal choice will be reflected on your balance sheet.

The right solution must fit your business culture, be aligned with your IT strategy, be built on modern tech and be economically advantageous. Culturally, the solution must provide fast time to value for business users and operational analysts, and serve engineers, developers and data scientists. Strategically, it must be an open system that integrates with your IT and security operations and has compliant governance controls. Technologically, it must be multi-cloud native to give you the scalability of workloads and freedom of vendor choice. And economically, its pricing must be attractive, transparent and predictable to grow with your business.

Recognizing these requirements, leading global brands in every industry vertical are employing cloud-native lakehouse architectures. The architecture combines the advantages of data warehouses with data lakes via managed cloud services. Not all lakehouses are open platforms or multi-cloud capable. Ask those questions if you want to avoid vendor lock-in, if you expect to operate in multiple geographies, or if you have independent subsidiaries.

Defining the business problem is of utmost importance. Based on the business problem, you need to identify what sort of data is required for your task. This is crucial, as the wrong data leads to garbage in, garbage out.

Now that you have identified what kind of data is to be collected, the next step is to pinpoint its sources. Internal sources, external sources, and paid data sources can all be leveraged, along with the tool stacks used at each stage, from data collection to insight building.

Consider the following while selecting your data analytics solution:

  • The data pipeline that you are intending to integrate should be compatible with your other applications. Create an ecosystem.
  • Your data pipeline should be agile, scalable and future-ready.
  • You should get real-time reports rather than post facto insights.
  • Perform a cost-benefit analysis before selecting and deploying any analytics solution.
  • It should not compromise with your data security.
  • The learning curve for the people using the solution should not be steep.
  • Start small, start now.
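The cost-benefit bullet above can be made concrete with a simple payback calculation. All figures below are illustrative assumptions, not real pricing:

```python
# A hypothetical payback calculation for an analytics rollout. Every figure
# here is an invented assumption for illustration, not vendor pricing.

implementation_cost = 25_000   # assumed one-off setup/integration cost
annual_license_cost = 60_000   # assumed yearly subscription
annual_benefit = 120_000       # assumed yearly value of faster decisions

def payback_period_years(one_off, annual_cost, annual_value):
    """Years until cumulative benefit covers cumulative cost, or None."""
    net_annual = annual_value - annual_cost
    if net_annual <= 0:
        return None  # the solution never pays for itself at these numbers
    return one_off / net_annual

years = payback_period_years(implementation_cost,
                             annual_license_cost, annual_benefit)
print(f"payback in {years:.1f} years")  # roughly 0.4 years at these numbers
```

If the net annual benefit is zero or negative, no payback period exists, which is exactly the signal to revisit the selection before deploying.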

A lot of the time, businesses get distracted by buzzwords like ML/AI, deep learning, and big data. We need to understand that an organization cannot become ML/AI-ready overnight; it takes time. The key is to start with a basic, small analytics solution, learn from it, and keep building on it.


from Help Net Security https://ift.tt/3pf1Eiv

Cybersecurity industry analysis: Another recurring vulnerability we must correct

I have spent my career finding, fixing, discussing, and breaking down software vulnerabilities, one way or another. I know that when it comes to some common security bugs, despite being in our orbit since the 90s, they continue to plague our software and cause major problems, even though the (often simple) fix has been known for almost the same length of time. It truly feels like Groundhog Day, where we as an industry seem to do the same thing over and over and expect a different result.


There’s another little problem, however. We’re not getting realistic advice, nor the fastest solutions, to combat the non-stop onslaught that is the modern threat landscape. Of course, each breach is different in its own way and there are numerous attack vectors that can be exploited in vulnerable software. Feasible generic advice will be limited, but the best practice approach is looking more flawed by the hour.

To this end, I do have to wonder why so much of the commentary and analysis around cybersecurity has omitted solutions that truly address the root cause of so many vulnerabilities: humans. Gartner’s recent Hype Cycle for Application Security report, and Forrester’s The State of Application Security 2021 report – both bibles for security experts that undoubtedly help to shape their program and potential product adoption – are almost entirely tools-focused.

A report by Aberdeen back in 2017 showed just how unruly the average security tech stack had become, with CISOs managing hundreds of products as part of their security strategies; four years later, we’re grappling with more risk, more vulnerabilities, and more additions to growing tech stack beasts.

Security tooling is a must-have, but we need to look wider and restore balance to the people component of security defense.

Automation is the future. Why should we care about the human element of cybersecurity?

Virtually everything in our lives is powered by software, and it’s true that automation is replacing the human elements that were once present in so many industries. It’s a sign of progress in a world digitizing at warp speed, with AI and machine learning hot topics keeping many organizations future-focused.

So, why, then, would a human-focused approach to cybersecurity be anything other than an antiquated solution to a technologically advancing problem? The fact that billions of data records have been stolen in breaches in the past year, including the most recent Facebook breach affecting over half a billion accounts, should indicate that we’re not doing enough (or taking the right approach) to make a serious counter-punch against threat actors.

Cybersecurity tooling is a much-needed component of cyber defense, and tools will always have a place. Analysts have been absolutely on point in recommending the latest tools in a risk mitigation approach for enterprises, and that will not change. However, with code quality (and, by definition, security) difficult to manage at the volume of code production, tools cannot do the job alone. To date, there is no single tool that will:

  • Scan for every vulnerability, in every language and framework
  • Scan at speed
  • Minimize the double-handling caused by false positives and negatives

Tools can be slow, cumbersome, and unwieldy. Above all, however, they only find problems – they don’t fix them, or recommend solutions. The latter requires security experts, who are thin on the ground and overworked, wading through the trash to find treasure in endless penetration testing and scanning results.

The fact is, according to the IBM Cyber Security Intelligence Index Report, human error plays a role in 95% of all successful data breaches. Almost half of those directly relate to software vulnerabilities, many of which could be alleviated if there was stronger adherence to secure coding and awareness in the early stages of the SDLC. However, for this to happen, a sharper and more relevant focus on education for developers – in addition to making it intrinsic to their workflow – is key.

Whether we like it or not, humans are deeply ingrained in the software development process, and cybersecurity is overwhelmingly a human problem. Tools won’t be a catch-all to correct a fundamental flaw in our approach, but they can play a key supporting role in reshaping human solutions.

What if we just built better tools (and lots of them)?

Security tooling is improving all the time. SAST/DAST/IAST tools have come a long way, improving in speed and intelligence, and RASP should be a serious defensive consideration in many application environments. Firewalls, secrets managers, cloud and network security applications: all no-brainers.

Humans can always strive to make better tools, but the innovation is not keeping up with the security and data protection needs of the digital world we live in. Tools are, for the most part, built with robots in mind. They might be there to assist developers and the security team in scanning, monitoring, or protecting code, but interaction is very limited, and very few solutions aim to elevate security awareness or improve core skills that can lead to better security outcomes.

In fact, more than half of enterprises don’t even know if the tools are working for them, nor are they confident that they could avoid a devastating data breach. That’s a very poor sentiment, and in a tools-obsessed industry lacking support for a different approach, tends to solidify the status quo and the problems within.

How can an organization leverage a human-led approach to security?

There is no question that staying ahead of the trends in application security technology is beneficial and can even help prioritize upgrades or consolidations in a bloated tech stack. But to forgo targeting the root cause of vulnerable software – we mere humans – is going to keep us on the losing side of the cybersecurity battlefront.

If we want to get serious about decreasing the number of code-level security vulnerabilities, then developers need to be given the foundations to succeed in sharing responsibility for security. They need relevant, hands-on education and on-the-job upskilling, and functional tooling that doesn’t disrupt their workflow, or make security a chore to develop. Ideally, some tools would be developer-centric, built with their user experience front-of-mind.

To this day, no formal security certification program exists for developers, but every company can benefit from benchmarking and growing secure coding skills, killing common vulnerabilities early and often, and before that big tech stack has to lurch into action and slow everything down.

A team of security-aware developers is a hidden treasure for any organization, but like anything worth having, it will take time and effort to implement an effective dream team. Winning developers over to care about security and to view secure coding as a foundation of code quality takes an organization-wide commitment to put security first. And when entire teams are switched on to the positive impact they can have in eliminating common vulnerabilities as code is written, there isn’t a tool on Earth that can compete.


from Help Net Security https://ift.tt/2ReUGh2

Helping security teams respond to gaps in security and compliance programs with Qualys CSAM

Unlike traditional inventory tools that focus solely on visibility or rely on third-party solutions to collect security data, Qualys CyberSecurity Asset Management (CSAM) is an all-in-one solution.

In this interview with Help Net Security, Edward Rossi, VP, Product Management, Asset Inventory and Discovery at Qualys, talks about how the solution enables security professionals to see the entire picture of their assets – from inventory to detection to response.


Many organizations can’t secure their hybrid IT environments since they don’t know what is in their inventory. What makes visibility into security context a gold mine for security teams?

When we spoke with our customers, it became clear that organizations need a comprehensive security view of their IT asset infrastructure, and they are struggling to get it. While traditional IT teams and inventory tools provide an IT view of inventory, software support, and licensing, security teams are looking for the security context of assets such as assets that are not running security tools, detection of unauthorized software, internet visibility, and more.

These teams need to manage the risk posture of the assets rather than only inventorying the assets. In fact, an increasing number of mandates like FedRAMP and PCI require organizations to report asset inventory data correlated with the security health posture of those assets.

Security tools like EDR help secure assets, but they do not let security teams know which critical assets are not running EDR, or whether databases are visible from the internet. All security teams have defined authorized and unauthorized software policies. Yet operationalizing these policies and alerting on deviations would help security teams pinpoint issues immediately, instead of waiting on inventory data from IT teams or manually correlating the data.

Asset inventory data specifically managed with security context helps security teams continuously assess asset risk, detect at-risk assets, and prioritize an often overwhelming number of security issues so they can respond quickly.

But doesn’t this mean that overburdened security teams just have to do more?

They would if they had to do it themselves. The primary goal of our new solution Qualys CyberSecurity Asset Management (CSAM) is to narrow down the IT inventory to focus on assets and applications that require the most urgent attention.

Security teams don’t just want a list of static issues, and adding security context on an ad hoc basis or manually on top of IT asset inventory doesn’t work. They need to monitor changes to the security context of their assets, so they know when new assets with specific characteristics or risk profiles are introduced or when the risk of existing assets changes.

With limited resources on security teams, automated detection and alerting tools are required to achieve the scale and scope of managing small- and medium-sized environments, let alone enterprise-scale infrastructure. Simple policies like “No databases should run on webservers” can become a complex challenge to implement.
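A policy like “no databases should run on webservers” can be pictured as a query over an asset inventory. The records and software names below are hypothetical and do not reflect the Qualys data model:

```python
# A toy implementation of the policy "no databases should run on
# webservers" against an asset inventory. All records are invented.

DATABASES = {"mysql", "postgresql", "mongodb"}

assets = [
    {"name": "web-01", "roles": {"webserver"}, "software": {"nginx", "mysql"}},
    {"name": "web-02", "roles": {"webserver"}, "software": {"nginx"}},
    {"name": "db-01",  "roles": {"database"},  "software": {"postgresql"}},
]

def policy_violations(inventory):
    """Return webservers that are also running database software."""
    return [a["name"] for a in inventory
            if "webserver" in a["roles"] and a["software"] & DATABASES]

print(policy_violations(assets))  # ['web-01']
```

The hard part in practice is not the query but keeping the inventory complete and current, which is exactly why automated detection and alerting are required at scale.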

Qualys recently unveiled CSAM. What are its main features and what makes it unique?

First, Global AssetView, our free IT Inventory offering, automatically discovers and classifies all IT assets including software, on-prem devices and applications, mobile, clouds, containers, and enterprise IoT devices using both agent and agentless methods. It works in conjunction with the Qualys Cloud Platform and Qualys sensors (scanners, cloud connectors, container sensors, cloud agents, passive sensors and APIs), ensuring that you have a comprehensive view of your entire IT asset inventory.

CyberSecurity Asset Management builds on our free Global AssetView app and moves the needle beyond inventory by adding security context and response. It is asset management reimagined for security teams, focused on identifying all systems comprehensively, detecting at-risk assets, and mitigating with appropriate actions.

The app fills the gap between traditional IT inventory and the core security functions by overlaying key business and asset criticality data, establishing unauthorized and authorized software lists, applying current and upcoming EOL/EOS data, providing an outside-in view of the organization’s internet-facing assets, highlighting security endpoint blind spots, monitoring the result with policy-based alerts, and facilitating appropriate response with software uninstall. It represents a security foundation on which organizations can deploy and build before easily moving to vulnerability management, endpoint detection and policy compliance using our single agent.

CSAM delivers asset and risk detection from a single platform, providing comprehensive inventory from multiple native sensors and third-party sources with real-time grouping and classification. It also includes policy-based detection of an asset’s security health by applying business criticality and risk context, detecting security tool gaps and responding with alerts or unauthorized software removal, thus reducing the ‘threat debt’.

Other cybersecurity point solutions rely solely on third-party inventory tools or siloed technologies to collect data. Qualys not only provides a multi-pronged hybrid inventory capability with an in-context security view but uses that same infrastructure to deliver Endpoint Detection & Response, Vulnerability & Patch Management, Policy Compliance and more.


How does the new solution help with enterprise IoT?

The incredible proliferation of IoT devices has vastly expanded the enterprise attack surface, but discovering, managing, and protecting those devices by traditional methods is not scalable. These devices lack built-in security controls. They can’t easily receive software updates, and can’t host agents, which leaves them unseen and unmonitored by traditional enterprise cybersecurity products.

There is also a lack of visibility into both known and rogue IoT devices connecting to the network. Not all IoT devices were designed with security in mind: they often contain clear-text passwords and weak self-signed certificates, and are implemented without encrypted communications.

Qualys’ ability to track and identify IoT devices is crucial to ensuring overall visibility. CyberSecurity Asset Management identifies enterprise IoT devices by leveraging the Global AssetView passive sensor to listen to network traffic and identify all IP-connected devices in real time. CSAM dissects multiple protocols to fingerprint and uniquely identify thousands of IoT devices.
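Passive fingerprinting of this kind can be pictured as matching observed protocol attributes against a signature table. The signatures below are invented for illustration; real fingerprinting dissects far more protocol detail than a single banner:

```python
# A toy illustration of passive device fingerprinting: map an observed
# (protocol, banner) pair to a device category. Signatures are invented.

SIGNATURES = {
    ("sip", "AcmeVoIP-7000"): "VoIP phone",
    ("rtsp", "AcmeCam-200"): "security camera",
    ("bacnet", "AcmeBAS-1"): "building automation controller",
}

def fingerprint(protocol, banner):
    """Look up a (protocol, banner) pair in the signature table."""
    return SIGNATURES.get((protocol, banner), "unknown device")

print(fingerprint("rtsp", "AcmeCam-200"))  # security camera
```

The "unknown device" fallback matters: unmatched devices are precisely the rogue endpoints a security team needs surfaced.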

Qualys is also significantly extending its enterprise IoT fingerprinting library and profiling capability for tens of thousands of additional devices across key categories prevalent within our customer networks. These devices include VoIP phones, building automation devices, access control and badge readers, security cameras, connected audio and media devices, IoT gateways and access points, network printers, smartphones and tablets.

Qualys CSAM allows teams to focus security prioritization efforts on high-importance and high-risk assets using Asset Criticality. What does that include?

Asset criticality, defined by the user’s unique business environment, is a key tool that helps customers focus their security prioritization efforts on high-importance assets. It is a user-defined measure of asset function, environment, and service. With CSAM, data pulled from a customer’s CMDB automatically assigns the asset criticality score to a tag and the corresponding asset. By assigning asset criticality to Qualys tags, users can prioritize based on a wide range of factors, including assets that are cloud-based, running databases, or in production, as well as location- and function-based factors, such as assets at headquarters or those belonging to a key business service.

And with multiple tags linked to a given asset, the highest criticality value is identified and assigned as a searchable attribute to the asset itself. This provides the user with a flexible, dynamic and scalable method of establishing and automatically updating an asset’s criticality. Once defined, this measure can be used in conjunction with other context data to focus on assets with the greatest potential to impact the business.
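The “highest criticality wins” rule described above reduces to a simple maximum over tag scores. The tag names and scores here are hypothetical, not Qualys defaults:

```python
# A sketch of resolving asset criticality from multiple tags: the highest
# tag score becomes the asset's criticality. Tags and scores are invented.

tag_criticality = {"production": 4, "database": 5, "headquarters": 3}

def asset_criticality(tags):
    """Resolve an asset's criticality as the max over its tags' scores."""
    scores = [tag_criticality[t] for t in tags if t in tag_criticality]
    return max(scores) if scores else 0

print(asset_criticality({"production", "database"}))  # 5
```

Because the resolution is recomputed from tags, adding or removing a tag automatically updates the asset's criticality, which is the dynamic behavior described above.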

How can Qualys CSAM users take advantage of the identification of at-risk assets?

CyberSecurity Asset Management applies security context to help identify at-risk assets. For example, it allows for the management of authorized and unauthorized software lists, also known as whitelists and blacklists. It also understands which assets are missing required security and monitoring tools or which assets are running software they shouldn’t be running.

EOL and EOS software is also identified as it represents a substantial risk as vendors are no longer supporting these versions. Additionally, Qualys’ external scanning and integration with third-party sources like Shodan.io gives an outside-in view based on the IPs owned by your organization, so you can see which assets in your inventory are visible from the internet.

When used in combination, the detection features of CSAM allow you to answer questions like, “Do I have databases running on internet-exposed systems used by the accounting department at the headquarters office?” CSAM automatically alerts on configured policies and can even uninstall unauthorized or EOL/EOS software directly from the CSAM application.

What types of reports are available in Qualys CSAM?

Out-of-the-box security health reports are available, including FedRAMP and PCI-DSS, providing a high-level view of a set of assets, e.g. for an individual office in an organization, as shown below.


In addition, interactive reports allow you to flexibly drill down into any set of assets for insights into specific asset risk. The strength of the reporting rests on the combination of comprehensive asset inventory, the security context that CSAM applies to the inventory, including from Qualys’ CMDB integration, and the flexible filtering and drilldown features in the reporting itself, supported by normalization and categorization of the data.

To learn more about Qualys CyberSecurity Asset Management, please join us for our AssetView Live event on June 2.


from Help Net Security https://ift.tt/3wMghMW

EUCC: the first candidate EU cybersecurity certification scheme

In July 2019, the EUCC was the first candidate cybersecurity certification scheme request received by the EU Agency for Cybersecurity (ENISA) under the Cybersecurity Act.


This scheme aims to serve as a successor to the currently existing schemes operating under the SOGIS MRA (Senior Officials Group Information Systems Security Mutual Recognition Agreement).

It covers the certification of ICT products using the Common Criteria (ISO/IEC 15408) and is the foundation of a European cybersecurity certification framework.

The latter will consist of several schemes that are expected to gradually increase trust in ICT products, services and processes certified under them, and to reduce costs within the Digital Single Market.

This scheme was originally published on 1 July 2020 and put up for public consultation, which allowed certification actors and interested parties to provide their feedback through a dedicated survey.

Key points of the public consultation outcome

  • Confirmed the intent of certification stakeholders to use the scheme in the internal market once it is made available
  • Encouraged ENISA to further develop guidance supporting the implementation and execution of the scheme
  • Identified elements of the scheme that need to be adjusted or fixed, such as conditions or timelines for the maintenance of certificates, and the monitoring and handling of non-compliances or vulnerabilities

Key recommendations

In addition to the candidate scheme, ENISA supports the EU cybersecurity certification framework with recommendations to:

  • Develop a communications plan targeting consumers to support the implementation of the EUCC scheme and ensure they are well informed about what cybersecurity certification of ICT products entails
  • Ease the participation in the EUCC scheme of EU Member States that are newcomers to cybersecurity certification by providing a dedicated training programme
  • Establish a transition project to provide and ensure the best conditions for a smooth transfer from the current national SOG-IS activities to the EUCC

The Agency has now transmitted the candidate EUCC scheme v1.1.1 to the Commission, in line with the provisions of Article 49(6) and (7) of Regulation (EU) 2019/881 (the Cybersecurity Act). On this basis, the Commission may initiate a Commission Implementing Regulation for adoption.


from Help Net Security https://ift.tt/3yPJUyH

Endpoint complexities leaving sensitive data at risk

Absolute Software announced key findings from its report which shines a light on key trends affecting enterprise data and device security, and underscores the dangers of compromised security controls in expanding an already wide attack surface for today’s enterprises.


Researchers estimate that the number of ransomware attacks grew by more than 150% in 2020, fueled by the global pandemic and the massive disruption to IT and security operations.

According to the Coveware Quarterly Ransomware Report, the most common software vulnerabilities exploited by ransomware attackers in Q1 (Jan – Mar) 2021 involved VPNs. It goes on to state that “the cyber extortion economic supply chain demonstrated how a vulnerability in widely used VPN appliances can be identified, exploited and monetized by ransomware affiliates.”

With increasing endpoint complexities comes increased risk

The findings reveal that the need to support and secure remote workforces only exacerbated the existing complexities found in today’s endpoint environments – and with increasing complexity comes the increased risk of friction, failure, and noncompliance.

One in four devices analyzed had critical security controls — such as encryption, antivirus, or VPN — considered to be unhealthy, or not working effectively, at any given time. If left unaddressed, almost any application deployed on the endpoint carries the potential of becoming an attack vector.

“The trends in this year’s report — unaddressed vulnerabilities, unprotected data, and failing security controls – are clear indicators that it is time for organizations to put rigor around ensuring the endpoint security tools they’ve invested in are effectively protecting their valuable, and vulnerable, corporate devices and data,” said Christy Wyatt, President and CEO of Absolute.

“And, the findings underscore the critical need for resilient endpoints and applications in the evolving ‘work from anywhere’ era. The ability to identify and mitigate risk is dependent on having the ability to monitor the state of every device and application, identify where things might be fragile or falling down, and autonomously heal them when needed.”

Other notable insights

  • Endpoint complexity and redundancy continue to plague enterprises: The average number of security controls has increased to more than 11 per enterprise device, with the majority of devices containing multiple controls with the same function. 60% of enterprise devices analyzed had two or more encryption applications installed, while 52% had three or more endpoint management applications installed.
  • Sensitive data remains unprotected and at risk: 73% of enterprise devices analyzed contained sensitive data, such as Protected Health Information (PHI) or Personally Identifiable Information (PII). Compounding the risk of exposure, 23% of devices with high levels of sensitive data also reported unhealthy encryption controls.
  • Patching delays leave critical vulnerabilities unaddressed: The average Windows 10 enterprise device was found to be 80 days behind in applying the latest available OS patches. More than 40% of Windows 10 enterprise devices were running version 1909, which is associated with over 1,000 known vulnerabilities.
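Patch lag of the kind reported above can be computed per device from two dates: when the latest available patch was released and when the device last applied one. The device records below are invented for illustration:

```python
# A sketch of computing per-device patch lag and the fleet average.
# Device names and dates are hypothetical.
from datetime import date

devices = [
    {"name": "laptop-1",
     "latest_patch": date(2021, 3, 9), "applied": date(2020, 12, 8)},
    {"name": "laptop-2",
     "latest_patch": date(2021, 3, 9), "applied": date(2021, 3, 9)},
]

# Days between the latest available patch and the one actually applied.
lags = [(d["name"], (d["latest_patch"] - d["applied"]).days) for d in devices]
avg_lag = sum(l for _, l in lags) / len(lags)

print(lags)
print(f"average lag: {avg_lag} days")
```

Tracking this number continuously, rather than in annual reports, is what turns a statistic like "80 days behind" into something a team can act on.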

from Help Net Security https://ift.tt/3vDtNlS

The human cost of understaffed SOCs

SOC and IT security teams are suffering from high levels of stress outside of the working day – with alert overload a prime culprit, a Trend Micro study reveals.


According to the study, which polled 2,303 IT security and SOC decision makers across companies of all sizes and verticals, 70% of respondents say their home lives are being emotionally impacted by their work managing IT threat alerts.

This comes as 51% feel their team is being overwhelmed by the volume of alerts and 55% admit that they aren’t entirely confident in their ability to prioritize and respond to them. It’s no wonder therefore that teams are spending as much as 27% of their time dealing with false positives.

SOCs and IT teams heavily understaffed

These findings are corroborated by a recent Forrester study, which found that “security teams are heavily understaffed when it comes to incident response, even as they face more attacks. Security operations centers (SOCs) need a more-effective method of detection and response; thus, XDR takes a dramatically different approach to other tools on the market today.”

Outside of work, the high volumes of alerts leave many SOC managers unable to switch off or relax, and irritable with friends and family. Inside work, they cause individuals to turn off alerts (43% do so occasionally or frequently), walk away from their computer (43%), hope another team member will step in (50%), or ignore what is coming in entirely (40%).

“We’re used to cybersecurity being described in terms of people, process and technology,” said Dr. Victoria Baines, Cybersecurity Researcher and Author.

“All too often, though, people are portrayed as a vulnerability rather than an asset, and technical defenses are prioritized over human resilience. It’s high time we renewed our investment in our human security assets. That means looking after our colleagues and teams, and ensuring they have tools that allow them to focus on what humans do best.”

Pressure sometimes comes at an enormous personal cost

With a staggering 74% of respondents already dealing with a breach or expecting one within the year, and an estimated average cost per breach of $235,000, the consequences of such actions could be disastrous.

“SOC team members play a crucial role on the cyber frontline, managing and responding to threat alerts to keep their organizations safe from potentially catastrophic breaches. But as this research shows, that pressure sometimes comes at an enormous personal cost,” said Bharat Mistry, technical director for Trend Micro.

“To avoid losing their best people to burnout, organizations must look to more sophisticated threat detection and response platforms that can intelligently correlate and prioritize alerts. This will not only improve overall protection but also enhance analyst productivity and job satisfaction levels.”


from Help Net Security https://ift.tt/3vDYCXF

Security leaders more concerned about legal settlements than regulatory fines

An overwhelming 90% of security leaders are concerned about group legal settlements following a serious data breach, compared to 85% who are worried about regulatory fines, Egress reveals.


Launched to commemorate three years of GDPR, the research also found that 47% of consumers would be likely to join a class-action lawsuit against an organization that had leaked their data, suggesting security leaders’ fears are well founded.

In response, 91% of security leaders are turning to cyber insurance to protect themselves from financial exposure by either taking out new policies or increasing their cover because of GDPR.

The survey, independently conducted by OnePoll on behalf of Egress, interviewed 250 security leaders and DPOs in the UK and 2,000 UK consumers.

Security leaders concerned about data breach legal settlements

  • 90% of security leaders are concerned about class action by data subjects in the event of a serious data breach, whereas 85% are concerned about regulatory fines
  • 47% of UK consumers say they’d join a class-action lawsuit against an organization that had leaked their data
  • 91% of security leaders reported taking out cyber insurance, or upgrading their policy, as a result of GDPR
  • 67% of UK consumers are aware that they have the right to take legal action against an organization that suffers a breach that exposes their personal data

Egress CEO Tony Pepper comments: “The financial cost of a data breach has always driven discussion around GDPR – and initially, it was thought hefty regulatory fines would do the most damage. But the widely unforeseen consequences of class action lawsuits and independent litigation are now dominating conversation. Organizations can challenge the ICO’s intention to fine to reduce the price tag, and over the last year, the ICO has shown leniency towards pandemic-hit businesses, such as British Airways, letting them off with greatly reduced fines that have been seen by many as merely a slap on the wrist.

“With data subjects highly aware of their rights and lawsuits potentially becoming ‘opt-out’ for those affected in future, security leaders are right to be nervous about the financial impacts of litigation.”

Lisa Forte, Partner at Red Goat Cyber Security, comments: “The greatest financial risk post breach no longer sits with the regulatory fines that could be issued. Lawsuits are now commonplace and could equal the writing of a blank cheque if your data is compromised.

Companies will need deeper pockets to cover the lawsuits

European countries haven’t typically subscribed to a litigious way of regulating the behaviour of companies. That is now changing, and without explicit Government intervention, companies will need to accept that they need deeper pockets to cover the lawsuit gold rush we are starting to see.

The recent Google case that currently sits with the UK Supreme Court could make group claims “opt out” instead of “opt in”. That will inevitably mean that every single customer affected would be entered into the group action. That should be a huge worry for companies.

Companies need to really prioritise preventative measures both technical and human and have a tested incident plan in place.”

Eric Bedell, Chief Privacy Officer, Franklin Templeton, comments: “When it came into force back in 2018, GDPR set the tone for how the use of personal data should be regulated. While regulatory fines have been in the news (and are often used as a trigger for GDPR implementation), there is a lesser-known aspect: the right to take legal action against an organization, not only for data breaches, but also for failure to erase personal data, to rectify it, to respond to Data Subject Access Requests (DSARs) or to provide portable information.

While in the United States, under CCPA, we have seen many actions, in Europe this right is not (yet) widely used. However, I predict that this will grow as the right to take legal action becomes more popular – especially since the ICO publishes a web page providing guidance for data subjects taking such action. As a firm, this is a risk you want to consider, maybe more than regulatory fines, in my view.”

Cyber insurance won’t help recover reputational damage

Edina Csics, GDPR & Data Protection Consultant at GIS-Consulting, comments: “While cyber insurance might cover the financial damage caused by a data breach, it won’t help recover any reputational damage done. I hope that the 91% of respondents that have changed their cyber-insurance policies in response to GDPR have also considered doing the right thing by putting more serious measures in place than click-through employee security training and remediating their loosely implemented security technologies in addition to, and not instead of, taking out cyber-insurance. Data breaches do occur, and it’s a matter of when and not if, but in many cases these could be prevented.

But whatever their motivation, be it fearing collective lawsuits or regulatory fines, in taking steps to avoid financial damage, their actions may play in favor of consumers and the protection of their data.

Having said that, looking at the past activity of the ICO and its enforcement habits, I am inclined to understand why security leaders are more worried about the actions of those who are directly impacted – the data subjects whose personal data is subject to their not-quite watertight security measures – and those data protection activists that have an even higher drive to prove that there is more organizations can do to guard personal data.”


from Help Net Security https://ift.tt/2Tnxqhk

Group-IB opens MEA Threat Intelligence & Research Center in Dubai

Group-IB has officially announced the opening of its Middle East & Africa Threat Intelligence & Research Center in Dubai. The grand opening, held at the Habtoor Palace Dubai, was attended by representatives of the local financial organizations, government institutions, and the guest of honor, Mr. Craig Jones, INTERPOL Cybercrime director.

Group-IB’s leadership views the opening of its MEA Threat Intelligence & Research Center as a critical milestone toward the strategic goal of building the first-ever decentralized global cybersecurity company, with fully operational R&D centers in key financial hubs.

Group-IB’s office will operate not only as a sales hub but also as a full-scale regional HQ, offering all core technological competencies and bringing with it the top skills found across its global HQ in Singapore and other offices.

The new Center, located at the Dubai Internet City, will accommodate 18 employees from key Group-IB units: hi-tech crime investigations, Digital Forensics and Incident Response (DFIR) lab, Threat Intelligence, security assessment, Computer Emergency Response Team (CERT-GIB), Threat and Fraud Hunting teams, Digital Risk Protection department, and other major divisions.

“The threat of cybercrime is global, with regions being impacted differently,” noted Mr. Craig Jones, Director of Cybercrime, INTERPOL. “By understanding first-hand how the threats are evolving and what impact and harm they are causing in the region, I know that together we can mitigate those far-reaching threats and reduce harm more effectively. Encompassing a wide range of expertise, experience and skills, this HQ will play a pivotal role for Group IB’s research into the regional threat landscape and on-the-ground support for their customers and partners.

“INTERPOL’s Global Cybercrime Programme looks forward to further strengthening our partnership with Group-IB and increasing operational activities against cybercrime in the region in collaboration with this office.”

Dubai is one of the regional strongholds for the coordination of cross-border efforts against cybercrime and research into threat actors and their techniques. The brand-new Threat Intelligence & Research Center enables the local community to leverage Group-IB’s in-depth knowledge of criminal schemes and close collaboration with international law enforcement and cyber police forces worldwide.

The company’s battle-tested experts have carried out more than 1,200 successful investigations around the world over 18 years, enriching Group-IB’s technology ecosystem with a first-hand understanding of intrusion tactics used in the most sophisticated cyberattacks.

Knowledge transfer and the hiring of local talent are other key elements of Group-IB’s strategy. The company plans to have more than 50 team members in the UAE within the next 18 months. Leveraging its cyber education arm and successful track record with universities worldwide, the Dubai team will be tasked with investing in local talent by collaborating closely with UAE higher education institutions.

The initial hiring focus will be on digital forensics experts, investigators, and cyber threat intelligence and attribution specialists who are expected to join Group-IB’s MEA Threat Intelligence & Research Center.

“Zero tolerance to cybercriminals has brought us to the forefront of the global fight against online crime,” remarked Ilya Sachkov, Group-IB CEO and founder, commenting on the office opening. “Dubai is a perfect place to carry on this mission together with local institutions and international law enforcement. As part of our contribution to building a vibrant cybersecurity ecosystem in the UAE, we plan to develop world-class research, monitoring, incident detection and response capabilities here in Dubai and adapt them to the needs of the market,” he added.

Group-IB’s newly inaugurated Threat Intelligence & Research Center will serve the company’s existing customer base, which includes over 30 clients in the MEA region within the banking, government, insurance, and energy sectors. More local businesses will now be able to benefit from Group-IB’s distinctive organizational structure and technology ecosystem that includes equally strong product and service arms.

According to the annual “Hi-Tech Crime Trends report 2020/2021,” at least 18 state-sponsored threat actors, including APT33, MuddyWater, and APT41, targeted the MEA region alone. The Middle East has been a testing ground for piloting tools used in attacks on the energy sector and ICT from the time of Stuxnet until now. In this context, robust monitoring and response for IT and industrial OT networks play a pivotal role in protecting critical assets of smart cities, CII, and public and private companies in the UAE.

In his keynote speech, Group-IB CTO Dmitry Volkov highlighted other underlying regional cyber trends such as ransomware attacks and sale of access to corporate networks. According to Group-IB’s data, at least 12 victims suffered publicly known ransomware attacks in the Middle East in 2020, with most of them having taken place in the UAE.

To that end, Group-IB brings to the region a product and service portfolio that includes a first-ever all-in-one solution, Threat Hunting Framework, for the protection of both IT and OT segments. Another innovation becoming more accessible to local customers is Group-IB’s Threat Intelligence & Attribution (TI&A), a system designed to create and customize a cyber threat map for a specific company.

Every analyst who uses TI&A now gets access to the largest collection of dark web data, an advanced hacker group profiling model, and a fully automated graph analysis tool that helps correlate data and attribute threats to specific criminal groups in seconds. Group-IB’s TI&A has been deemed compliant with industry recommendations for gathering cyber threat intelligence data, issued by the United States Department of Justice for cybersecurity companies, by a Big Four accounting company.

Ashraf Koheil, an industry heavyweight, is the most recent addition to the Group-IB team in the UAE. Mr. Koheil brings over 25 years of entrepreneurial experience in IT&ICT security. He will lead Group-IB’s regional business development team.

“The UAE is one of the most progressive and demanding markets striving for continuous improvement in all aspects including government, services, banking with cybersecurity at the forefront,” comments Mr. Koheil. “This makes Group-IB a perfect fit as our mission is all about eliminating all facets of cybercrime, be it financially motivated, nation state activity, or social engineering scams. We expect a lot of our growth to come from the true partner friendly ecosystem that we’ve been creating in the region.

“We are also planning joint research & development with key government institutions in the financial sector and law enforcement to bring more localized solutions and develop local expertise.” Mr. Koheil noted that Group-IB’s investment plan in the MEA region also includes a stronghold in the Kingdom of Saudi Arabia, where the company has already built its MSSP hub.


from Help Net Security https://ift.tt/34woqsN

Shenoy Sandeep joins Cyble as Regional Director of META

Cyble announced that regional cybersecurity expert Shenoy Sandeep has joined Cyble as the Regional Director – Middle East, Turkey, and Africa (META).

This news follows Cyble’s recent announcement of a USD 4 million seed financing led by Blackbird Ventures and Spider Capital, with participation from Xoogler Ventures, Picus Capital, and Cathexis Ventures. Shenoy brings over 13 years’ experience in cybersecurity, having advised some of the most critical organizations in the region.

A well-known figure in the local cybersecurity community, Shenoy will be responsible for driving growth across the META region by highlighting the true value Cyble provides in the cyber threat intelligence monitoring space.

Mandar Patil, VP – International Market and Customer Success at Cyble said, “Shenoy is a reputed regional cybersecurity expert, and I am excited to be working with him. With his ability to understand the cybersecurity threat landscape across the META region, we intend to highlight the sheer visibility that Cyble’s AI-powered SaaS platform, Cyble Vision, can provide in terms of monitoring the attack surface across the deepweb, darkweb, and cybercrime marketplaces.

Cyble has seen record growth since its inception, and the company is on the journey towards dominance in the cyber threat intelligence space. Shenoy will engage with the local network of end-users, partners, and existing alliances to spearhead our regional growth activities.”

Recently, Vijay Sethi, Digital Transformation and Sustainability Evangelist, has joined the Advisory Board of Cyble. As part of its continued strategic hiring, Cyble has also appointed cybersecurity veteran Maxim Mitrokhin, ex-MD Kaspersky Lab APAC and former GM – APAC for Acronis Asia Pte Ltd. as the Regional Sales Director (SEA, GRC, Korea and Japan) & Channel – APAC.

In addition, former General Dynamics executive James Thornton joined hands with Cyble as the Regional Director Sales & Customer Success – North America. As Cyble scales new growth trajectories, the addition of cybersecurity experts to Cyble’s leadership team is a critical step in reinforcing the company’s sales blueprint and growth strategies globally.

Commenting on his appointment, Shenoy said, “I am impressed with what Cyble has achieved in its young startup journey, and I am proud and excited to be joining hands with the company. Cyble has the industry’s most sought-after threat intelligence research team. With a focus on a strategic machine and human analyst-driven threat monitoring offering, we ensure that our intelligence is accurate, risk-driven, and validated before notifying our customers. Aided by a global pool of threat intelligence sensors that now cover the region, the team has been able to receive insights into targeted activity and proactively inform end-users way before cybercriminals cause reputational damage. Having the largest visibility into the darkweb space, I am sure we will add value to our customers in the META region.”

“We are aware of Shenoy’s consultative approach in responding to customers’ cybersecurity challenges in the META region. With his extensive experience and reputation as a trusted advisor amongst the local CXO community, Shenoy’s appointment will truly add value to Cyble. The META region, comprising the GCC countries, is strategic for us at Cyble, and proactively engaging with customer requirements is our top priority. With a UAE-based entity to manage regional operations, onboarding local talent in the works, and an existing angel investment from Dubai-based venture capital firm VentureSouq, Cyble’s presence in the region will be impactful where everyone benefits within the ecosystem,” said Beenu Arora, Founder & CEO of Cyble.


from Help Net Security https://ift.tt/3c75u7Q

The Best Gardening Communities to Find on Reddit

Photo: Mariia Boiko (Shutterstock)

As helpful as (we’d like to think) reading articles about plants and gardening can be, it’s also nice to have an online community that can answer questions, give advice, and provide some horticultural inspiration. Fortunately, there are plenty of subreddits—some with hundreds of thousands of members from all over the world—that offer a deep dive into all things gardening.

In an article on BobVila.com, Alexa Erickson dug up the best plant-related subreddits that are both useful and entertaining. Here are a few to check out.

r/DramaticHouseplants

Love before-and-after transformations? Then this subreddit is for you. Members of the group discuss what went wrong with their plants, and then what they did to save them—along with accompanying photos or time-lapse videos.

r/plants

This plant-based subreddit has been around for more than a decade, and bills itself as a “place to share pictures and discuss growing, maintaining, and propagating houseplants and outdoor decorative plants.”


r/houseplants

Want to focus solely on plants that live in your home? Then this subreddit—about to celebrate its tenth anniversary—is for you.

r/IndoorGarden

Didn’t find what you needed in the r/houseplants subreddit? Try this one, which focuses on indoor gardens—including ones that grow vegetables and herbs.

r/gardening

Since March 2008, this subreddit has been used to discuss gardening, plants, and agriculture. It has 3.6 million members and is a “place for the best guides, pictures, and discussions of all things related to plants and their care.”

r/whatsthisplant

Not sure what kind of plant you’re dealing with? The 552,000 members of this subreddit can help. According to the page description: “Visitors are encouraged to submit requests as well as help out with identification.”

r/plantclinic

Is your plant sick? Is it no longer responding to its usual care routine? For a diagnosis, you’ll want to consult the 355,000 members of this subreddit.


from Lifehacker https://ift.tt/34AlvPz

How to Properly Fold, Display, and Dispose of the American Flag

Photo: Christopher bender (Shutterstock)

With plenty of American flags proudly displayed today for Memorial Day, it’s a good time to take a look at how to properly display, store, and dispose of Old Glory. For all the rules and regulations, we’ll turn to the Veterans of Foreign Wars (VFW) for guidance. Here’s what to know.

How to display the American flag

It’s not simply a matter of hanging the flag from a window, and calling it a day. Here are the rules for displaying it:

  • On same staff: U.S. flag at peak, above any other flag.
  • Grouped: U.S. flag goes to its own right. Flags of other nations are flown at the same height.
  • On speaker’s platform: When displayed with a speaker’s platform, it must be above and behind the speaker. If mounted on a staff, it is on the speaker’s right.
  • Decoration: Never use the flag for decoration. Use bunting with the blue on top, then white, then red.
  • Half staff: On special days, the flag may be flown at half-staff. On Memorial Day it is flown at half-staff until noon and then raised.

Other rules for displaying and transferring the flag

  • Do not let the flag touch the ground.
  • Do not fly the flag upside down unless there is an emergency.
  • Do not carry the flag flat, or carry things in it.
  • Do not use the flag as clothing.
  • Do not store the flag where it can get dirty.
  • Do not use it as a cover.
  • Do not fasten it or tie it back. Always allow it to fall free.
  • Do not draw on, or otherwise mark the flag.

How to properly fold an American flag

The VFW website has pictures, if you’re more of a visual learner, but their instructions are as follows:

  1. Fold the lower striped section of the flag over the blue field.
  2. The folded edge is then folded over to meet the open edge.
  3. A triangular fold is then started by bringing the striped corner of the folded edge to the open edge.
  4. The outer point is then turned inward, parallel with the open edge, to form a second triangle.
  5. The triangular folding is continued until the entire length of the flag is folded in the triangular shape, with only the blue field visible.


How to dispose of an American flag

If you’re unsure about how or when to dispose of a flag, you can always contact your local VFW Post for more information. But here are the basic instructions:

  1. The flag should be folded in its customary manner.
  2. It is important that the fire be fairly large and of sufficient intensity to ensure complete burning of the flag.
  3. Place the flag on the fire.
  4. The individual(s) can come to attention, salute the flag, recite the Pledge of Allegiance and have a brief period of silent reflection.
  5. After the flag is completely consumed, the fire should then be safely extinguished and the ashes buried.
  • Note: Please make sure you are conforming to local/state fire codes or ordinances.

from Lifehacker https://ift.tt/3uzHVLl

Plant These Quick-Sprouting Seeds If You Want a Garden as Fast as Possible

Photo: MarinaGreen (Shutterstock)

When starting a garden, you have two choices: planting seeds or seedlings (and other transplants). Those looking to have a close-to-instant garden will likely want to begin with already-sprouted plants. But if you decide to go the seed route, you may want to pick seeds that will produce at least something green as soon as possible.

In an article for Well + Good, writer Francesca Krempa spoke with Allison Vallin, an organic gardener from Maine, to find out which seeds sprout the fastest. Here’s what to know.

Do some seeds really grow faster than others?

If we learned one thing from those science experiments in school where we planted seeds in a paper cup, it’s that it takes a while before you see anything green. While Vallin says that it’s true that some seeds sprout earlier than others, that doesn’t mean they’ll all produce colorful blooms right away.

“Just because a seed germinates fast doesn’t mean that it blooms fast,” she tells Well + Good. “For example, flowers like violas and petunias take longer for their seeds to germinate than zinnias and cosmos, but they will produce their first flowers much earlier than the latter.”

5 types of seeds that sprout (relatively) quickly

When it comes to flowers, Vallin has five recommendations for seeds that will sprout faster than others:

Sunflowers

Don’t worry if you don’t have a large plot of land: not all sunflowers are huge. Vallin recommends planting seeds for dwarf sunflowers (which produce buds between 6 and 14 inches) in a self-draining pot or garden bed.


Nasturtiums

Eventually, nasturtium seeds will grow into bushes that are red, orange, and yellow in color. “Nasturtiums are drought-tolerant, so no need to overwater,” Vallin says. “Simply sow in regular soil and plant the seeds 1/2-inch deep and 10 feet apart. They prefer full sun and well-draining soil.”

Calendulas

After planting calendula seeds, Vallin recommends cutting off any dying blooms to make room for healthy growth. “The plant actively drops its seed, so you can expect to see ‘volunteers’ pop up next spring,” she adds.

Phlox

Eventually these seeds will produce purple, white, blue, and pink flowers. They come in perennial and annual varieties, and while the perennials might be less work in the long run, Vallin recommends the annuals if you’re going for sprouting speed.

Marigolds

Not only are marigolds great companion plants, they also sprout quickly.


from Lifehacker https://ift.tt/2TpU2hb

Sunday, May 30, 2021

Rethinking SIEM requires rethinking visibility

Security professionals now generally recognize that siloed security tools and systems have undercut efforts to find active attacks more quickly and efficiently.


Information security began decades ago with strategies of taking a layered approach and even relying on a heterogeneous mix of vendors. This meant that desktop or endpoint solutions were separate from, and made by different manufacturers than, those for the gateway or cloud. While the underlying tenets – not relying on a single vendor and taking advantage of best-of-breed expertise for each system or tool – are still valid, it has become obvious that data needs to be combined to understand the complete attack surface and the progression of the kill chain.

SIEM was created over fifteen years ago to integrate security data and provide real-time analysis of security alerts generated by applications and network hardware. Admittedly, there was too much reliance on log data and not a complete enough representation of all parts of the attack surface or assets being protected, but SIEMs have provided significant value. Still, they have not solved the escalating problem of attacks and breaches, or the problem of far too many false positives, which can result in inefficient SOC operations and severe penetration loopholes.

Other siloed security solutions have come to market to address these deficiencies, including Extended Detection and Response (XDR), Network Traffic Analysis (NTA) and User and Entity Behavior Analytics (UEBA). At the same time, companies are rethinking the SIEM to make it more effective. While each of these represents progress, each is contingent on getting real-time or near-real-time data from across the entire organization. The principle is similar to the early computing maxim of garbage in, garbage out. These systems can only be as good as the data they ingest.

Visibility is the key, but visibility is not a one-dimensional consideration; there are multiple aspects to weigh. The first is breadth, or coverage. It’s important to be able to see the signs of any East-West and North-South activity, especially the reconnaissance or lateral movement involved in an active attack – a full-dimensional view is required to expose the entire range of the physical and virtual attack surface. Of course, command-and-control communication and exfiltration signal an advancing attack, and both can be cleverly disguised and difficult to spot. Exfiltration typically occurs far into the attack process and may come at a point too late to mitigate or stop the theft or damage.

Second, signs of reconnaissance and lateral movement often must come from multiple sources to show progression and to improve the accuracy and speed of the findings. Data will likely come from multiple tools and directly from one or more points in the network or extended network. For these purposes, the traffic is usually taken out-of-band through SPAN or RSPAN ports, a network TAP, or a virtual TAP – each option can be selected based on the infrastructure architecture.

Third, data context can be critical for correlation and for providing improved insights and efficient, “polarized” traffic analysis. Sometimes data delivered through a vendor’s API lacks the details necessary to boost fidelity through a better understanding of context. Historical details may also be valuable in increasing the accuracy of an alert against the baseline behavior that was defined as normal.
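As a toy illustration of the baseline point (hypothetical data and thresholds, not a feature of any particular product), a simple score can express how far a new observation sits from historical behavior:

```python
# Hypothetical sketch: score a new observation against a historical baseline.
# A real SIEM would use richer features and models; this only shows the principle.
from statistics import mean, stdev

def anomaly_score(history, current):
    """Return how many standard deviations `current` sits from the baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(current - mu) / sigma

# Daily outbound-connection counts for a host over two weeks (illustrative data).
baseline = [42, 39, 45, 41, 38, 44, 40, 43, 41, 39, 42, 40, 44, 41]
print(anomaly_score(baseline, 120) > 3)  # far outside normal behavior
print(anomaly_score(baseline, 41) < 1)   # well within normal behavior
```

With historical context available, an alert can be weighted by how abnormal the behavior actually is, rather than firing on a fixed threshold alone.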

Sometimes full packet data is necessary, but often header information or extracted metadata is sufficient. Decryption of encrypted traffic is often essential, but ideally it can be done according to policy, to avoid compliance issues, exposing data to insiders, or creating liability. Here again, having a variety of options is important.

A fourth aspect is speed and capacity. Getting data in real time or near-real time is important; batch uploading of data from solutions may be too slow to stop an attack at the first opportunity. One value of integrating and correlating data within a SIEM or alternative processing center is that small signals, or data that alone may seem inconsequential, can be compounded to provide better insight and higher accuracy. Something that might have been overlooked does not have to slip through. This requires having all relevant data available quickly and in the same time frame. Capacity optimization is also necessary, because traffic can spike above the thresholds of processing power and network interface speeds. Such spikes can leave the SIEM or other tools unable to inspect the information efficiently and correlate it across the entire security chain.
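The compounding of small signals can be sketched in a few lines (hypothetical event data and field names, not any product’s API): individually weak signals from different tools become a high-confidence alert when they cluster on one host within a short window.

```python
# Hypothetical sketch: individually weak signals from different tools become a
# high-confidence alert when they land on the same host in a short time window.
from collections import defaultdict
from datetime import datetime, timedelta

events = [  # (source tool, host, timestamp) - illustrative data only
    ("edr",  "host-7", datetime(2021, 5, 30, 10, 1)),
    ("nta",  "host-7", datetime(2021, 5, 30, 10, 4)),
    ("auth", "host-7", datetime(2021, 5, 30, 10, 6)),
    ("nta",  "host-3", datetime(2021, 5, 30, 11, 0)),
]

def correlate(events, window=timedelta(minutes=10), min_sources=3):
    """Flag hosts reported by >= min_sources distinct tools inside the window."""
    by_host = defaultdict(list)
    for source, host, ts in events:
        by_host[host].append((ts, source))
    alerts = []
    for host, items in by_host.items():
        items.sort()  # order each host's events by time
        for i, (start, _) in enumerate(items):
            sources = {s for t, s in items[i:] if t - start <= window}
            if len(sources) >= min_sources:
                alerts.append(host)
                break
    return alerts

print(correlate(events))  # host-7 trips the correlation; host-3 does not
```

None of the three host-7 events would justify an alert alone; correlated in the same time frame, they do. This is also why late-arriving or batch-loaded data weakens the result: the window closes before the signals can be combined.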

Flexibility is also important, as the network is always changing. Every day likely sees new users, devices, applications, assets and attack vectors or threats. Having a way to get the exact visibility needed without significant rearranging of the infrastructure is helpful. Ideally, cabling will not have to be modified or routing infrastructure changed.

New technologies, such as XDR or some next generation of SIEM, offer considerable advances in closing the gap on attacks and attackers. These are all contingent on visibility—the ability to immediately and accurately uncover the work of an attacker.

In rethinking the SIEM or bringing in a new center to integrate, correlate and analyze data from across the network, consider all these aspects of visibility as well. Visibility needs to go hand in hand with advanced analytical solutions and the further use of machine learning and artificial intelligence to sustain and improve the defense of a wide attack surface on an ever-changing cyber battlefield.


from Help Net Security https://ift.tt/3c7FedF

Best practices for securing the CPaaS technology stack

Like everything that’s connected to the cloud, Communications Platform-as-a-Service (CPaaS) solutions are vulnerable to hacking, which increased dramatically as workforces shifted to remote and hybrid models because of the pandemic.

For this reason and others, such a platform must be built secure by design. This means taking the time necessary to examine and re-examine code and configuration, then make appropriate changes prior to deployment. Several things must happen in tandem for this to be successful.

From authenticating to an API for advanced features to credential management, it is critical to have a deep understanding and awareness of data protection best practices.

Calculating risk vs. benefit is an important first step before considering a CPaaS solution. It should also be part of an ongoing security practice after implementation, given that each organization has a unique set of circumstances and requirements that can change unexpectedly. From the start, it is essential to obtain as much information as possible regarding a vendor’s maturity and understanding of the processes and tools that keep CPaaS communications secure. Does the company design and implement its system with protection as a driving principle? If so, what are those principles?

Certifications are certainly important to consider when evaluating options, but even so, certifications alone don’t guarantee security. It is a best practice to check on the maturity of vendor-specific certifications, as some companies go through a process of self-certification that doesn’t necessarily ensure the level of security your organization needs. Sending a thoughtful questionnaire to multiple vendors can be helpful for scoring each vendor’s security, offering a holistic and specific viewpoint for an organization’s IT team to consider.

On the customer end, in-house security and engineering staff can prep for CPaaS implementation by becoming familiar with the use of APIs and the authentication methods, communications protocols and the data that flows to and from them. Hackers routinely perform reconnaissance to find unprotected APIs and exploit them.
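As a minimal illustration of hardening an API against that kind of reconnaissance, a server-side check can require a credential on every request and compare it in constant time. The client IDs and keys below are invented for the sketch; a real deployment would use per-client credentials pulled from a secrets manager.

```python
import hmac

# Server-side sketch: every inbound API request must present a key, which
# is compared in constant time (hmac.compare_digest) so the check itself
# leaks no timing signal to an attacker probing the endpoint.
API_KEYS = {"analytics-service": "s3cr3t-demo-key"}  # illustrative only

def authenticate(client_id: str, presented_key: str) -> bool:
    expected = API_KEYS.get(client_id)
    if expected is None:
        return False
    return hmac.compare_digest(expected, presented_key)

print(authenticate("analytics-service", "s3cr3t-demo-key"))  # True
print(authenticate("analytics-service", "wrong-key"))        # False
```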

Once CPaaS is incorporated into the hybrid work model technology stack, it is a best practice for an organization to focus on endpoint management. A centralized endpoint management system that pushes patches for BIOS, operating systems and applications is necessary for protecting the cloud network and customer data once a laptop connects.

VPN security should include a quarantine feature that prevents laptops from joining until they are confirmed to be patched and their anti-virus is confirmed to be running. Furthermore, it’s necessary to go one step further and ensure that end users are not administrators on their work laptops, so that anti-virus programs keep running and potential malware attacks are blocked.

After CPaaS implementation, security protocols should continue to be thoroughly reviewed and updated each year along with technology standards, including examining Transport Layer Security (TLS) to make certain the cipher suite and algorithms in use meet or exceed requirements for data encryption.
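A periodic TLS review of the kind described can be partly automated. The sketch below uses Python’s standard ssl module to pin the protocol floor at TLS 1.2 and flag any enabled cipher suite outside an illustrative allowlist; the allowlist prefixes are an assumed example policy, not a standard, and the exact suites reported depend on the local OpenSSL build.

```python
import ssl

# Illustrative policy: accept TLS 1.3 AEAD suites and ECDHE-based TLS 1.2
# suites; flag everything else for review.
APPROVED_PREFIXES = ("TLS_AES_", "TLS_CHACHA20_", "ECDHE-")

# Build the interpreter's default client context and raise the floor.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# List the enabled suites that fall outside the policy.
flagged = [c["name"] for c in context.get_ciphers()
           if not c["name"].startswith(APPROVED_PREFIXES)]

print("minimum protocol:", context.minimum_version.name)
print("suites outside policy:", flagged or "none")
```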

It is the responsibility of the CPaaS partner and its security and technology teams to work with customers and bring to their attention recommended changes, such as replacing a cipher suite or an algorithm (or encryption key that supports it) for a particular circuit to make sure that the most appropriate and recent standards are in place.

In many instances, the deployment of the right CPaaS solution into an existing communication infrastructure can bolster data security. Here’s why: it puts call flow and data flow configurations right in the hands of business users, enabling them to know and understand where the data flows are without having to work through big implementation projects with engineering teams and digging into multiple legacy systems.

If they try to do the same type of work through their own programming, or through old legacy interactive voice response (IVR) solutions, those data flows are likely to get lost to the people that need to know about them most. By consolidating all that information into one platform, it helps a business not only understand where data is and where data flows but also talk more intelligently about privacy and confidentiality.

The financial sector is one industry seeing the real benefits of CPaaS in helping customers activate credit cards and set PINs while simultaneously using AI and real-time speech recognition to verify data necessary to prevent fraud. Similarly, in healthcare, various states, counties and healthcare organizations across the U.S. have been able to quickly launch COVID-19 vaccine programs for scheduling and general information through CPaaS.

Through automated prompts and responses, people can easily access information and get answers to vaccine questions that otherwise may require a long and complex conversation with a healthcare provider. They can even schedule appointments safely and securely thanks to encryption in transit and encryption at rest, which has become common configuration in session initiation protocol (SIP) telephony only in the last couple of years.

Looking toward the future, there is undoubtedly more work to be done regarding security, particularly around identity management, access controls and two-factor authentication. This is especially important with the unpredictability of individual user and device security. There may be clever ways to improve unique person identifiers, like making the use of Secure Shell (SSH) keys very easy.

In addition to allowing remote login from one system into another, SSH keys provide strong public-key authentication and encrypted sessions, which makes them well suited not only to tasks associated with cloud computing but also to securing remote workforces.

Right now, only engineers—and very few of them—use SSH keys, which are foundational to IaaS platforms such as Google Cloud, Microsoft Azure and AWS. With business objectives shifting and evolving, SSH keys may play an invaluable role in further securing CPaaS. Until then, choosing a CPaaS partner wisely will help ensure the benefits far outweigh the risks.


from Help Net Security https://ift.tt/3p3iqRA

The value of SD-WAN connectivity

Masergy released the results of a research study assessing where businesses are in their journey to SD-WAN and Secure Access Service Edge (SASE). The results include new potential peaks in adoption and highlight the importance of reliability, security, and a growing preference for hybrid access. Findings from the research offer new insights into SD-WAN return on investment (ROI).

Altman Solon surveyed more than 300 IT decision makers in U.S. headquartered businesses across more than 20 industries.

SD-WAN connectivity gaining traction

  • SD-WAN is gaining traction in the digital business environment: SD-WAN adoption is expected to rise to 92% of companies and 64% of sites by 2026 with most adopting it for efficiency (38%), cost savings (38%), and agility (34%).
  • Performance and security matter most: Solution reliability (~50%) and security (~60%) are top priorities for selecting an SD-WAN provider.
  • SASE is not yet understood: Despite the hype, 50%+ of IT decision makers don’t have a good understanding of its impact and business implications.
  • Majority of companies will use hybrid SD-WAN: ~58% expect to use a hybrid access model (a mix of both public and private access) over the next five years. Both private-only access users (63%) and public-only access users (55%) are considering a shift to hybrid access. Among respondents using a public-only or internet-only approach to SD-WAN, 50% said they would incorporate more private access because performance is insufficient for their critical applications.
  • Companies are relying on SD-WAN service providers: Today only 23% use a do-it-yourself solution, and 77% use a fully managed or co-managed solution.
  • Private connectivity is here to stay: Private connectivity will continue to play a prominent role in backing up SD-WAN architectures.

“This study affirms that IT leaders understand the value of SD-WAN connectivity and are leaning into hybrid access models that strike the right balance between price and performance for a ROI ‘sweet spot’,” said Terry Traina, CTO at Masergy.

“The survey confirms what everyone is noticing anecdotally. There’s still a lot of confusion about SASE. Conceptually SASE makes sense, but turning its framework into a tactical plan can be challenging for IT leaders.”


from Help Net Security https://ift.tt/3yTjn3E

How colocation can improve TCO for the enterprise

CoreSite and IDG released a report which examines the latest data center trends, strategies, requirements, and other findings from an annual quantitative survey and in-depth interviews with senior IT decision makers.

“As businesses continue to empower remote workers and fortify their digital footprints in response to pandemic-induced changes, colocation has emerged as an essential pillar in a successful hybrid IT strategy,” said John Gallant, Enterprise Consulting Director at IDG Communications.

“The 2021 survey findings demonstrate that a diverse multi-cloud hybrid IT architecture can fuel transformation, foster resiliency and achieve important business outcomes.”

Colocation lowers operational costs and improves TCO

  • Increased security and flexibility/scalability were reported as the top two reasons to migrate workloads to a colocation solution
  • 35% of those surveyed say that colocation lowers operational costs and improves TCO
  • 46% of respondents say their companies are gravitating to an operating expense model for IT spending
  • 93% of IT leaders say they are confident their colocation partners can enable future transformation initiatives
  • 90% of respondents say interconnection to cloud providers is critical or important

One survey respondent, a senior director of a SaaS provider, stated “I don’t want to build my own data center – our competency is designing, engineering and maintaining software. I don’t want to get into the business of peering with carriers. Those are things that a colocation business provides.”

“The findings in this year’s State of the Data Center Report reflect what we are hearing from our customers and prospects,” said Steven Smith, Chief Revenue Officer at CoreSite.

“Colocation enables enterprises to save up to 60% on cloud connectivity, compared to telecommunications or software-defined network offerings, and roughly 70% for data replication expenses. Eliminating egress charges when restoring data from certain cloud availability zones is an example of how colocation can improve TCO for the enterprise.”
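To make the quoted savings concrete, here is a back-of-the-envelope calculation applying the 60% connectivity and 70% replication figures to a hypothetical monthly spend; the dollar amounts are invented inputs, not survey data.

```python
# Illustrative TCO check using the savings rates quoted above
# (60% on cloud connectivity, 70% on data replication). The monthly
# spend figures are hypothetical.
monthly_spend = {"cloud_connectivity": 10_000, "data_replication": 4_000}
savings_rate = {"cloud_connectivity": 0.60, "data_replication": 0.70}

saved = sum(monthly_spend[k] * savings_rate[k] for k in monthly_spend)
print(f"Estimated monthly savings: ${saved:,.0f}")  # Estimated monthly savings: $8,800
```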


from Help Net Security https://ift.tt/3iaLK7v

Organizations have seen an increase in device encryption

32% of organizations have seen an increase in device encryption in the past year, according to a Vanson Bourne survey.

Additionally, 31 percent noted that their organization now requires all data to be encrypted as standard, whether it’s at rest or in transit, and 24 percent require the encryption of all data when it’s being stored on their systems or in the cloud.

Further to this, 27 percent of surveyed IT decision makers stated their organization has increased the implementation of encryption in other ways – up from zero in the 2020 survey. This rise is likely due to organizations having had to operate in new working environments with increased remote working and the need to implement new systems and controls as a consequence.

Jon Fielding, Managing Director EMEA, Apricorn commented: “The pandemic upended business operations, with vast numbers thrown into remote working. Data traffic is no longer simply moving from the confines of the corporate network, but from numerous devices and from a multitude of locations.

“Encryption is increasingly recognised as a key component for data security and cyber resilience, especially at the highest levels. Examples include the use of encryption being one of very few technologies recommended within GDPR and Joe Biden’s recent Executive Order, stipulating the need to adopt encryption for data at rest and in transit. If ever there were a time to increase and execute the use of encryption, this is it!”

Lack of encryption and misplaced devices causing data breaches

Regardless of the increase in the use of encryption, when asked to select up to three main causes of a data breach within their organization, 30 percent of those surveyed report lack of encryption (12%) and lost/misplaced devices containing sensitive corporate information (18%) as main causes. This could be due to the absence of control over corporate data.

When reporting up to three of the biggest challenges associated with implementing a cyber security plan for remote/mobile working, 39 percent of those surveyed admitted they cannot be certain that their data is adequately secured, 18 percent said they don’t have a good understanding of which of their data sets need to be encrypted and 15 percent have no control over where company data goes and where it is stored.

Most organizations require encryption

That said, 77 percent confirmed their organization had a policy in place that requires encryption of all data held on removable media. Of those:

  • 33 percent only allow the use of hardware-encrypted, organization-approved removable media
  • 18 percent only allow organization-approved removable media that aren’t hardware encrypted, but software encrypt everything written to them
  • 16 percent allow the use of all removable media devices, including employees’ own USB sticks, but software encrypt everything written to them
  • 10 percent have an alternative policy that requires encryption of all data held on removable media
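Those policy tiers can be expressed as a simple decision function. The attribute names and tier labels below are illustrative, not taken from the survey itself.

```python
# Decision-function sketch of removable media policy tiers: hardware
# encryption on approved devices is preferred, software encryption of all
# writes is the fallback, and anything else is blocked.
def media_decision(approved: bool, hardware_encrypted: bool,
                   software_encrypts_writes: bool) -> str:
    if approved and hardware_encrypted:
        return "allow"
    if approved and software_encrypts_writes:
        return "allow (software encryption enforced)"
    if software_encrypts_writes:
        return "allow BYOD (software encryption enforced)"
    return "block"

print(media_decision(True, True, False))    # allow
print(media_decision(False, False, False))  # block
```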

However, 20 percent of IT decision makers either tell their employees they are not permitted to use removable media (7%), or physically block all removable media (13%).

“Whilst businesses should only allow corporately approved, hardware encrypted devices to those with a business justification, not allowing, or physically blocking removable media can impede productivity and put data at risk. By deploying the right solutions at the endpoint, it not only allows employees to use their own hardware safely, but gives them autonomy, assisting operational agility and defending against the risk of cyberattack,” points out Fielding.

Positively, 88 percent state their organization has an information security strategy/policy that covers employees’ use of their own IT equipment for mobile/remote working, 22 percent of which allow only corporate IT provisioned devices and have security measures in place to enforce this with end point control.

The increase in encryption usage looks set to continue

When asked which devices their organization currently encrypts and where they plan to expand encryption, surveyed IT decision makers noted that their organizations already encrypt some devices and plan to expand usage across USB sticks (19%), laptops (16%), desktops (12%), mobiles (22%) and portable hard drives (18%).

“Remote working has become the ‘new normal’ and it’s crucial that businesses now address any quick fix security solutions they had put in place and ensure the security of corporate data. The rise in endpoint control, and the plans for increased encryption are hugely positive, but this needs to be embedded in remote working policies if businesses are to avoid the potential for a data breach and failure to comply with existing regulations,” Fielding added.


from Help Net Security https://ift.tt/3c73Set

SeKVM: Securing virtual machines in the cloud

Whenever you buy something on Amazon, your customer data is automatically updated and stored on thousands of virtual machines in the cloud. For businesses like Amazon, ensuring the safety and security of the data of its millions of customers is essential. This is true for large and small organizations alike. But up to now, there has been no way to guarantee that a software system is secure.

SeKVM

Columbia Engineering researchers may have solved this security issue. They have developed SeKVM, the first system that guarantees – through a mathematical proof – the security of virtual machines in the cloud. The researchers hope to lay the foundation for future innovations in system software verification, leading to a new generation of cyber-resilient system software.

SeKVM as the first formally verified system for cloud computing

Formal verification is a critical step as it is the process of proving that software is mathematically correct, that the program’s code works as it should, and there are no hidden security bugs to worry about.

“This is the first time that a real-world multiprocessor software system has been shown to be mathematically correct and secure,” said Jason Nieh, professor of computer science and co-director of the Software Systems Laboratory. “This means that users’ data are correctly managed by software running in the cloud and are safe from security bugs and hackers.”

The construction of correct and secure system software has been one of the grand challenges of computing. Nieh has worked on different aspects of software systems since joining Columbia Engineering in 1999. When Ronghui Gu, the Tang Family Assistant Professor of Computer Science and an expert in formal verification, joined the computer science department in 2018, he and Nieh decided to collaborate on exploring formal verification of software systems.

Over the past dozen years, there has been a good deal of attention paid to formal verification, including work on verifying multiprocessor operating systems. “But all of that research has been conducted on small toy-like systems that nobody uses in real life,” said Gu. “Verifying a multiprocessor commodity system, a system in wide use like Linux, has been thought to be more or less impossible.”

Deploying hypervisors to support virtual machines

The exponential growth of cloud computing has enabled companies and users to move their data and computation off-site into virtual machines running on hosts in the cloud. Cloud computing providers, like Amazon, deploy hypervisors to support these virtual machines.

A hypervisor is the key piece of software that makes cloud computing possible. The security of the virtual machine’s data hinges on the correctness and trustworthiness of the hypervisor. Despite their importance, hypervisors are complicated – they can include an entire Linux operating system. Just a single weak link in the code – one that is virtually impossible to detect via traditional testing – can make a system vulnerable to hackers. Even if a hypervisor is written 99% correctly, a hacker can still exploit the remaining 1% and take control of the system.

Nieh and Gu’s work is the first to verify a commodity system, specifically the widely-used KVM hypervisor, which is used to run virtual machines by cloud providers such as Amazon. They proved that SeKVM, which is KVM with some small changes, is secure and guarantees that virtual computers are isolated from one another.

“We’ve shown that our system can protect and secure private data and computing uploaded to the cloud with mathematical guarantees,” said Xupeng Li, Gu’s PhD student and co-lead author of the paper. “This has never been done before.”

SeKVM was verified using MicroV, a new framework for verifying the security properties of large systems. It is based on the hypothesis that small changes to the system can make it significantly easier to verify, a new technique the researchers call microverification. This novel layering technique retrofits an existing system and extracts the components that enforce security into a small core that is verified and guarantees the security of the entire system.
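As a toy illustration of the layering idea (and emphatically not the MicroV proofs themselves, which are machine-checked), one can model a small core that mediates every guest memory access and exhaustively check an isolation invariant over a tiny, made-up state space:

```python
from itertools import product

# Toy model of microverification: a small "core" mediates all page
# accesses between guest VMs. The page table and VM names are
# hypothetical; the point is that only the core needs checking for the
# isolation property to hold system-wide.
PAGES = range(4)
OWNER = {0: "vm1", 1: "vm1", 2: "vm2", 3: "vm2"}

def core_access(vm: str, page: int) -> bool:
    """The small, checkable core: grant access only to owned pages."""
    return OWNER[page] == vm

# Exhaustively check the isolation invariant over the whole state space:
# no VM is ever granted access to a page it does not own.
violations = [(vm, p) for vm, p in product(("vm1", "vm2"), PAGES)
              if core_access(vm, p) and OWNER[p] != vm]

print("violations:", violations)  # violations: []
```

Real formal verification proves such an invariant for all possible executions via mathematical proof rather than enumeration, but the structure is analogous: verify the small core, and the guarantee extends to the untrusted code around it.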

SeKVM to serve as a safeguard in various domains

The changes needed to retrofit a large system are quite modest: the researchers demonstrated that if the small core of the larger system is intact, then the system is secure and no private data will be leaked. This is how they were able to verify a large system such as KVM, which was previously thought to be impossible.

“Think of a house–a crack in the drywall doesn’t mean that the integrity of the house is at risk,” Nieh explained. “It’s still structurally sound and the key structural system is good.”

Shih-Wei Li, Nieh’s PhD student and co-lead author of the study, added, “SeKVM will serve as a safeguard in various domains, from banking systems and Internet of Things devices to autonomous vehicles and cryptocurrencies.”

As the first verified commodity hypervisor, SeKVM could change how cloud services should be designed, developed, deployed, and trusted. In a world where cybersecurity is a growing concern, this resiliency is highly in demand. Major cloud companies are already exploring how they can leverage SeKVM to meet this demand.


from Help Net Security https://ift.tt/3fBFmEq