Tuesday, April 30, 2019

As organizations continue to adopt multicloud strategies, security remains an issue

97 percent of organizations are adopting multicloud strategies for mission-critical applications and nearly two-thirds are using multiple vendors for mission-critical workloads, a Virtustream survey reveals.

The study, conducted by Forrester Consulting, is based on a global survey of more than 700 cloud technology decision makers at businesses with more than 500 employees.

The study examines the current state of enterprise IT strategies for cloud-based workloads and details the increasing interest and needs of IT decision makers for multi-use cloud architectures.

Multicloud investments are on the rise

The study shows that multicloud deployments are here to stay, and investments look to increase over the next two years. Budgets for staffing, training and investments in multicloud strategies are on the rise, causing organizations to add new expertise and skills around maintenance, implementation and cost optimization.

Almost 90 percent of organizations predict they’ll maintain or increase their investment and staffing for multicloud deployments over the next two years. Specifically, 87 percent will maintain or increase training investment and 88 percent will maintain or increase investment in managed service support.

“Forrester’s new study confirms that organizations are investing in and adopting multicloud deployments for their mission-critical applications to derive significant improvements in agility, performance and cost savings,” said Joy Corso, Chief Marketing Officer, Virtustream.

“With the market’s continuing evolution to multicloud, IT decision makers are either hiring or turning to companies with deep expertise in end-to-end migration planning and services as well as deep expertise in automated, secure, highly scalable and high-performance cloud services for their mission-critical enterprise applications.”

Big business benefits associated with multicloud for mission-critical applications

A significant number of enterprises say they are using multicloud strategies today for mission-critical applications, with the top-ranked use cases centering on customer and financial data, in addition to sales applications.

This wave of adoption has raised confidence in using multicloud solutions for mission-critical applications – in fact, nearly 75 percent of organizations say they are using two to three cloud providers today for those business-critical applications.

Surveyed IT leaders showed a diverse set of use cases for their multicloud strategies and believe such an approach yields broad benefits, from increased performance and agility to improved efficiency and costs.

Performance and cost savings ranked as the top success metrics organizations use for evaluating these strategies. The third most cited benefit of multicloud is the ability to quickly and efficiently respond to changes and challenges within the business.

Security and management challenges top list of concerns

Multicloud deployments are complex, and nearly all surveyed organizations experienced issues with deploying and using multiple cloud environments. Although 61 percent feel their multicloud strategy is well aligned to their business objectives, security and management challenges remain the top issues with use, migration and deployment.

In response, organizations are looking to add staff with specific multicloud experience and to work with cloud vendors with expertise and managed service offerings.


from Help Net Security http://bit.ly/2ZVUkLM

How much does the average employee know about data privacy?

With the impacts and repercussions of the looming California Consumer Privacy Act (CCPA) on the minds of many privacy professionals, new research from MediaPRO shows more work is needed to educate U.S. employees about this first-of-its-kind privacy regulation.

MediaPRO’s 2019 Eye on Privacy Report reveals 46 percent of U.S. employees have never heard of CCPA, which sets specific requirements for the management of consumer data for companies handling the personal data of California residents.

Passed last year and going into effect in January 2020, the CCPA has been referred to as a U.S. General Data Protection Regulation (GDPR) for its scope and focus on data rights. Privacy experts expect the law to apply to more than 500,000 U.S. companies. The 2019 Eye on Privacy Report findings suggest that raising employee awareness should play a key role in preparing for this new regulation.

Data privacy and the public

The CCPA awareness findings come from MediaPRO’s 2019 Eye on Privacy Report, a survey of more than 1,000 U.S.-based employees. The survey tested knowledge on data privacy best practices and privacy regulations in addition to gauging opinions on a variety of different privacy topics.

The survey presented participants with questions concerning when to report potential privacy incidents, what qualifies as sensitive data, how comfortable respondents were with mobile device apps having specific permissions, and the most serious threats to the security of sensitive data.

Additional findings from the report

  • 58 percent of employees said they had never heard of the PCI Standard, a global set of payment card industry (PCI) guidelines that govern how credit card information is handled.
  • 12 percent of employees said they were unsure if they should report a cybercriminal stealing sensitive client data while at work.
  • Technology sector employees were least likely to identify and prioritize the most sensitive information. For example, 73 percent of those in the tech sector ranked Social Security numbers as most sensitive, compared to 88 percent of employees in all other industries ranking this type of data as most sensitive.
  • Employees were more comfortable with a mobile device app tracking their device’s location than with an app accessing contact and browser information, being able to take pictures and video, and posting to social media.
  • Theft of login credentials was considered the most serious threat to sensitive data, with disgruntled employees stealing data and phishing emails coming next.

The findings give weight to the vital role employees play in a strong data privacy posture and the continuing need for privacy awareness training in protecting sensitive information. Working toward a “business-as-usual” approach to data privacy, with best practices embedded into all employee actions, is increasingly becoming a must for companies of all sizes.

“We’re at a pivotal time in history for privacy, and more people than ever are paying attention to privacy and data protection,” MediaPRO’s Chief Learning Officer Tom Pendergast said.

“Some of our survey results might make you think that people are starting to get it—but until everybody gets it, we in the privacy profession really can’t rest. In today’s world, protecting personal information really is everyone’s responsibility, and that’s why it’s up to us to champion year-round privacy awareness training programs that aim to create a risk-aware culture.”


from Help Net Security http://bit.ly/2UNqo0B

Security and compliance obstacles among the top challenges for cloud native adoption

Cloud native adoption has become an important trend among organizations as they move to embrace and employ a combination of cloud, containers, orchestration, and microservices to keep up with customers’ expectations and needs.

To discover more about the motivations and challenges of companies adopting cloud native infrastructure, the O’Reilly “How Companies Adopt and Apply Cloud Native Infrastructure” report surveyed 590 practitioners, managers and CxOs from across the globe. It found that while nearly 70 percent of respondents said their organizations have adopted, or at least have begun to adopt, cloud native infrastructure, more than 30 percent still have not adopted any sort of cloud native infrastructure.

Nearly 50 percent of survey respondents cited a lack of skills as the top challenge their organizations face in adopting cloud native infrastructures. Respondents also identified problems in migrating from legacy architecture and transforming their corporate culture.

Another top challenge included overcoming security and compliance obstacles – important hurdles that continue to require attention when considering cloud native implementations.

Other key findings include:

  • 40 percent of respondents use a hybrid cloud architecture. The hybrid approach can accommodate data that can’t be on a public cloud and can serve as an interim architecture for organizations migrating to a cloud native architecture.
  • 48 percent of respondents rely on a multi-cloud strategy that involves two or more vendors. This helps organizations avoid lock-in to any one cloud provider and provides access to proprietary features that each of the major cloud vendors provide.
  • 47 percent of respondents working in organizations that have adopted cloud native infrastructures said DevOps teams are responsible for their organizations’ cloud native infrastructures, signaling a tight bond between DevOps and cloud native concepts.
  • Among respondents whose organizations have adopted cloud native infrastructure, 88 percent use containers and 69 percent use orchestration tools like Kubernetes. These findings align with the Next Architecture hypothesis that cloud native infrastructure best meets the demands put on an organization’s digital properties.

“With today’s ever-changing technology advancements, there is a fundamental movement to adopt a cloud native infrastructure,” said Roger Magoulas, Vice President of O’Reilly Radar.

“Companies are rising to the occasion and meeting the increasing demands of users and customers, but it’s important to remember that true cloud native success takes time. Start small and focus on the shift of services gradually, while investing in the transition. As the cloud native market continues to develop, we expect to see many opportunities for tools and training to help ease the transition to new architectures and to bridge the cloud native skills gap.”


from Help Net Security http://bit.ly/2ISlNsk

5G brings great opportunities but requires a network transformation

Telecom operators are overwhelmingly optimistic about the 5G business outlook and are moving forward aggressively with deployment plans.

Twelve percent of operators expect to roll out 5G services in 2019, and an additional 86 percent expect to be delivering 5G services by 2021, according to a Vertiv survey of more than 100 global telecom decision makers with visibility into 5G and edge strategies and plans.

The “Telco Study Reveals Industry Hopes and Fears: From Energy Costs to Edge Computing Transformation” research covers 5G deployment plans, services supported by early deployments, and the most important technical enablers for 5G success.

According to the survey, those initial services will be focused on supporting existing data services (96 percent) and new consumer services (36 percent). About one-third of respondents (32 percent) expect to support existing enterprise services with 18 percent saying they expect to deliver new enterprise services.

As networks continue to evolve and coverage expands, 5G itself will become a key enabler of emerging edge use cases that require high-bandwidth, low latency data transmission, such as virtual and augmented reality, digital healthcare, and smart homes, buildings, factories and cities.

However, illustrating the scale of the challenge, the majority of respondents (68 percent) do not expect to achieve total 5G coverage until 2028 or later. Twenty-eight percent expect to have total coverage by 2027 while only 4 percent expect to have total coverage by 2025.

“While telcos recognize the opportunity 5G presents, they also understand the network transformation required to support 5G services,” said Martin Olsen, vice president of global edge and integrated solutions at Vertiv.

“This report brings clarity to the challenges they face and reinforces the role innovative, energy-efficient network infrastructure will play in enabling 5G to realize its potential.”

To support 5G services, telcos are ramping up the deployment of multi-access edge computing (MEC) sites, which bring the capabilities of the cloud directly to the radio access network. Thirty-seven percent of respondents said they are already deploying MEC infrastructure ahead of 5G deployments while an additional 47 percent intend to deploy MECs.

As these new computing locations supporting 5G come online, the ability to remotely monitor and manage increasingly dense networks becomes more critical to maintaining profitability. In the area of remote management, data center infrastructure management (DCIM) was identified as the most important enabler (55 percent), followed by energy management (49 percent).

Remote management will be critical, as the report suggests the network densification required for 5G could require operators to double the number of radio access locations around the globe in the next 10-15 years.

The survey also asked respondents to identify their plans for dealing with energy issues today and five years in the future, when large portions of the network will be supporting 5G – a shift that 94 percent of participants expect to increase network energy consumption. Among the key findings:

  • Reducing AC to DC conversions will continue to be an area of emphasis, with 79 percent of respondents saying this is a focus today and 85 percent saying it will be a focus five years from now.
  • New cooling techniques will see the biggest jump in adoption over the next five years. Currently being used by 43 percent of telcos worldwide, this number is expected to increase to 73 percent in five years.
  • Upgrades from VRLA to lithium-ion batteries also show significant growth. Currently, 66 percent of telcos are upgrading their batteries. Five years from now, that number is projected to jump to 81 percent.

“5G represents the most impactful and difficult network upgrade ever faced by the telecom industry,” said Brian Partridge, research vice president for 451 Research.

“In general, the industry recognizes the scale of this challenge and the need for enabling technologies and services to help it maintain profitability by more efficiently managing increasingly distributed networks and mitigating the impact of higher energy costs.”


from Help Net Security http://bit.ly/2J7V7Ds

CompTIA unveils a beta exam for its Cloud Essentials+ credential

CompTIA, the leading provider of vendor-neutral skills certifications for the global technology workforce, launched a beta exam for its CompTIA Cloud Essentials+ credential.

CompTIA Cloud Essentials+ validates the knowledge and skills required to make business decisions about cloud products and services. The certification is intended for both business and technology professionals responsible for evaluating the business value of cloud technologies and making informed decisions and recommendations for their organization.

“The cloud has sparked an evolution in thinking about the role of technology: from a behind-the-scenes tactical tool to a valuable strategic asset that turns businesses into digital organizations and makes greater innovation possible,” said Dr. James Stanger, chief technology evangelist for CompTIA.

“But to unlock its true value, decision-makers must have a clear understanding about cloud technologies and their potential business impacts,” Stanger continued. “Individuals who are CompTIA Cloud Essentials+ certified have demonstrated that they have the knowledge and skills to make informed decisions and recommendations on the business case for the cloud.”

The CompTIA Cloud Essentials+ beta exam has been significantly changed and updated with 80 percent to 90 percent new content that addresses market demand and employers’ needs for professionals with validated cloud business skills. Areas covered in the exam include:

  • The components that should be included in a comprehensive cloud assessment.
  • The business, financial and operational aspects of implementing cloud-based technologies.
  • Security, risk management and compliance threats and solutions evaluated from a business perspective.
  • New technology concepts, such as data analytics, the Internet of Things and blockchain, and how they relate to cloud services.

“These concepts are now covered in great detail in the certification exam,” said Kristin Ludwig, director, product management, CompTIA. “CompTIA is unique in that we approach these issues from a vendor-neutral perspective. That’s a critical differentiator because many organizations have embraced a multi-cloud environment employing the services of multiple cloud vendors.”

CompTIA recommends that candidates for the beta exam have between six and twelve months of work experience as a business analyst in an information technology environment with some exposure to cloud technologies.

Individuals who earn a passing score on the beta exam will become CompTIA Cloud Essentials+ certified. Beta test takers will be notified of test results after the launch of the official new exam, which is scheduled for mid-November.


from Help Net Security http://bit.ly/2UMBqTL

BioCatch launches a behavioral biometrics-based digital identity solution

BioCatch, the global leader in AI-driven behavioral biometrics, announced at the annual ForgeRock Identity Summit Americas that its behavioral biometrics-based digital identity solution is now available on the ForgeRock Marketplace.

Combining BioCatch’s industry-leading solution with ForgeRock’s Intelligent Authentication technology makes it easy for ForgeRock clients to implement passive authentication and prevent account takeover attacks for a better customer identity and access management experience.

Unlike one-time passwords and other static means of authentication that are easily circumvented or spoofed by today’s sophisticated fraudsters, BioCatch monitors the entire digital identity lifecycle, from account creation to login and beyond, providing continuous authentication throughout an online session.

In the process, BioCatch analyzes more than 2,000 parameters, and, if it detects activity that matches the behavioral profile of a fraudster, sends an alert to the ForgeRock Identity Platform. This reduces unnecessary escalations and maintains a seamless user experience. Other benefits include greater consistency and visibility across multiple digital channels, as the solution supports both web and mobile applications.
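
BioCatch does not publish its scoring models, but the general idea of continuous behavioral scoring can be sketched in a few lines: compare a session’s behavioral features against a stored per-user baseline and escalate when the deviation is large. The sketch below is purely illustrative; the feature names, thresholds and profile structure are invented and are not BioCatch’s actual parameters or API.

```python
# Illustrative only -- not BioCatch's model or API. Shows the general idea of
# continuous behavioral scoring: compare a few session features (typing
# cadence, mouse speed, ...) against a stored per-user baseline and alert
# when the deviation crosses a threshold. All names and values are invented.
from dataclasses import dataclass

@dataclass
class BehaviorProfile:
    means: dict[str, float]   # per-feature baseline mean for this user
    stdevs: dict[str, float]  # per-feature baseline standard deviation

def session_risk_score(profile: BehaviorProfile, session: dict[str, float]) -> float:
    """Average absolute z-score of the session's features against the baseline."""
    deviations = []
    for feature, value in session.items():
        mean = profile.means.get(feature)
        std = profile.stdevs.get(feature)
        if mean is None or not std:
            continue  # unknown feature or degenerate baseline: skip it
        deviations.append(abs(value - mean) / std)
    return sum(deviations) / len(deviations) if deviations else 0.0

profile = BehaviorProfile(
    means={"keystroke_interval_ms": 120.0, "mouse_speed_px_s": 800.0},
    stdevs={"keystroke_interval_ms": 25.0, "mouse_speed_px_s": 150.0},
)
session = {"keystroke_interval_ms": 45.0, "mouse_speed_px_s": 2400.0}

RISK_THRESHOLD = 3.0  # arbitrary cut-off for this sketch
if session_risk_score(profile, session) > RISK_THRESHOLD:
    print("behavioral anomaly: escalate to step-up authentication")
```

A production system would track far more signals (BioCatch cites more than 2,000 parameters) and would route alerts into an orchestration layer such as the ForgeRock Identity Platform rather than printing them.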

“Behavioral biometrics is entering the mainstream, as it is the only way to answer consumer demands for continuous security, privacy and convenience. As attention turns to implementation and scale, the BioCatch-ForgeRock integration demonstrates how it is not only possible, but easy, for organizations to deploy this technology,” said Avi Turgeman, Co-Founder and CTO of BioCatch.

“Just as we aim to enable secure and seamless online experiences for end users, we are also committed to working with partners that make it easy for CISOs to implement our market-leading technology into their environments.”

While 100% of the fraud that BioCatch sees today occurs after the login, reflecting the threat of account takeover attacks, a recent Aite Group Impact Report highlights the growing trend towards seamless authentication as well as consumers’ desire to avoid unnecessary friction when transacting online.

Behavioral biometrics is highlighted as one of the leading technologies that can address the paradox and provide a solution that addresses the need for security as well as user convenience. BioCatch has emerged as the industry leader in this space, with an unparalleled patent portfolio and an approach that works across the digital lifecycle.

“As behavioral biometrics becomes a must-have in the authentication suite, it requires integration into an orchestration and provisioning layer so that alerts are properly managed and streamlined into the organization’s overall workflow. Our partnership with BioCatch eliminates the complexity involved when implementing behavioral biometrics,” said Ben Goodman, Senior Vice President of Global Strategy and Innovation at ForgeRock.

“With tier-one customers around the globe and a proven ROI, BioCatch has set the standard for behavioral biometrics as a key component of next generation digital identity frameworks, and ForgeRock is one of the primary platforms enabling the enterprise to easily adopt it.”


from Help Net Security http://bit.ly/2GLsXLu

Verint adds Anomaly Detection to its VoC solutions

Verint Systems, The Customer Engagement Company, announced the addition of Anomaly Detection as a powerful new capability to its expanding Voice of Customer (VoC) solutions.

Anomaly Detection is part of Verint’s analytics-rich solution that helps companies automate insights and prioritize improvements to customer experience (CX) that will drive the greatest business impact.

According to an August 2018 report from Forrester Research, AI technologies have the potential to make customer experience (CX) measurement programs more effective and efficient.

Powered by AI and machine learning algorithms, Verint’s new Anomaly Detection capability helps teams understand, in near real time, more about the key factors and causes contributing to a change in customer satisfaction, NPS, or other drivers.

Anomaly Detection acts as a ‘virtual CX analyst,’ enabling faster, smarter issue resolution and less risk of bias. Machine learning algorithms run in the background and surface significant, sudden changes in CX scores, along with the top possible causes, by analyzing thousands of data combinations, a task that would be impossible to perform manually.
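
Verint has not published the algorithms behind Anomaly Detection, but the underlying statistical idea of flagging sudden score changes can be illustrated with a simple rolling z-score check. The window size, threshold and sample data below are arbitrary assumptions, not Verint’s implementation.

```python
# Minimal sketch of surfacing "significant, sudden changes" in a CX metric:
# a rolling z-score over a daily NPS series. Window and threshold are
# arbitrary; this is not Verint's algorithm.
from statistics import mean, stdev

def detect_anomalies(series: list[float], window: int = 14, threshold: float = 3.0) -> list[int]:
    """Return indices where a value deviates sharply from its trailing window."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

daily_nps = [42, 41, 43, 44, 42, 43, 41, 42, 44, 43, 42, 41, 43, 42, 28]  # sudden dip on the last day
print(detect_anomalies(daily_nps))  # -> [14]
```

A real system would, as described above, go further and correlate each flagged change with the feedback, drivers and operational data most likely to explain it.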

Key features include:

  • Constant monitoring of significant changes to NPS, CSAT or driver scores based on past and predicted performance
  • Rapid investigation of most likely causes behind sudden changes in CX
  • Real-time alerts via SMS or email to speed time to action and resolution

“Millions of customer interactions happen every day, creating more feedback and new ways to gain insights,” said Jaime Meritt, CTO and chief architect, Verint.

“Our advancements in automation and machine learning help companies run enterprise-strength VoC programs that capture and analyze feedback, monitor dips and surges to CX metrics in real time and connect that data to CX drivers and outcomes. Verint VoC gives companies what they need to automate and operationalize CX.”


from Help Net Security http://bit.ly/2VA4AK3

Virtustream partners with Equinix Cloud Exchange and updates its platform

Virtustream, an enterprise-class cloud company and Dell Technologies business, announced a major expansion of its partnership with Equinix Cloud Exchange (ECX) and new platform updates to increase functionality, automation, speed-to-deployment and customer choice.

These enhancements cover all workloads, including mission-critical applications typically used for managing sensitive data like customer and financial details or patient information in the healthcare industry.

New connectivity options

Virtustream’s expanded partnership with ECX further extends network connectivity options, accelerating time-to-market by giving Virtustream Enterprise Cloud customers in North America and EMEA simplified access to secure, reliable and high-performance direct connectivity.

The expanded enhancements and support for the Equinix Cloud Exchange Fabric offer more customer control, minimize security threats, and enable easier and faster connectivity access.

The expanded options include commercial Virtustream Enterprise Cloud nodes in North America and EMEA and a broader portfolio of private connectivity options building on existing IPSEC VPN, MPLS, and AT&T NetBond (selected markets) solutions, providing reduced complexity, simplified direct connectivity and vendor management enhancements.

The portfolio provides secure, scalable and reliable connections with 99.999% availability-based QoS controls and low latency, while time-to-connect can be dramatically reduced from weeks to just hours in most cases. Furthermore, the Equinix Cloud Exchange Fabric provides streamlined private connectivity to all major hyperscale cloud providers for customers with multi-cloud requirements.

“We are delivering new innovations and capabilities at a rapid pace, so our customers can accelerate the value of their business,” said Deepak Patil, senior vice president, Cloud Platform and Services, Virtustream.

“Meeting our customers’ mission-critical needs to help them grow is at the core of our roadmap and we’ll continue to bring to market the kind of innovation and new offerings that unleash businesses and organizations to flourish in the clouds.”

Virtustream in the healthcare industry

Virtustream also announced the release of a major update to its enterprise-class Virtustream Healthcare Cloud. This update features new, advanced architecture components with improved flexibility and scale. Through improved automation, customers can greatly simplify the deployment and migration of EHR systems hosted in the Virtustream Healthcare Cloud.

Additionally, with this new release, Virtustream now supports the use of VMware Horizon for secure and flexible application access. With this update, Virtustream’s healthcare customers can improve their business agility, allowing for rapid access to a broad range of market-leading tools from Dell Technologies.


from Help Net Security http://bit.ly/2PDy5p0

ZeroNorth raises $10M to accelerate its focus on software and infrastructure risk management

ZeroNorth, the security industry’s first provider of orchestrated risk management, launched with a $10 million Series A investment led by ClearSky Ventures with participation from Crosslink Capital, Rally Ventures and existing investor Petrillo Capital.

The funding will enable ZeroNorth, formerly known as CYBRIC, to accelerate its newly-extended focus on software and infrastructure risk management by strengthening research and development, and investing in sales, marketing and services to meet growing demand for its platform. This round brings the company’s total funding to $18.6 million.

Organizations including Rodan & Fields, the University of Massachusetts and Zerto rely on ZeroNorth to proactively manage software and infrastructure risks as the pace of digital transformation continues to accelerate.

“Today every organization is in the software business. Software and the infrastructure it runs on are critical assets and continuous deployment is essential – but not at the expense of security,” said Peter Kuper, managing director at ClearSky Ventures.

“ZeroNorth makes it possible for organizations to have both fast and secure production software – something that was considered incompatible before. Most importantly, ZeroNorth makes it possible for organizations to easily discover and remediate vulnerabilities without disrupting the software development process. Its orchestration platform will be critical to protecting this software-defined world and why we are so excited to be a supporter of this effort.”

ZeroNorth accelerates and scales proactive software and infrastructure risk management by continuously orchestrating the discovery and remediation of vulnerabilities. Its “mission-control” orchestration platform enables organizations to construct and manage an automated and consistent software security program.

As a result, the platform directly provides board-level visibility into business risk, the assurance of better security, continuous proof of compliance and a more cost-effective risk management program.

Traditionally, organizations rely on multiple scanning tools to identify vulnerabilities in different phases of development, deployment and operation. However, each tool classifies vulnerabilities differently, has its own console and requires a dedicated employee to manage it.

Among the many challenges of this approach is that it does not allow for a single, full-stack view of the constantly changing risks inherent in continuous deployment. In addition, relying on disconnected tools becomes expensive and difficult to staff amid a widening talent gap in cybersecurity. ZeroNorth transforms these manual and siloed efforts into an orchestrated, comprehensive and real-time discovery and remediation process.
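
One way to picture the normalization step such orchestration implies is a short sketch that maps findings from two hypothetical scanners, each with its own field names and severity scale, into a single record type so they can be de-duplicated and prioritized together. This is not ZeroNorth’s data model; every field name, severity mapping and sample payload below is invented for illustration.

```python
# Hypothetical normalization of scanner output into one common record.
# Field names, severity mappings and sample payloads are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    asset: str      # host, repo, or container image the issue was found in
    issue_id: str   # CVE or rule identifier
    severity: int   # normalized 1 (low) .. 4 (critical)
    source: str     # which scanner reported it

def from_sast(raw: dict) -> Finding:
    sev_map = {"info": 1, "warning": 2, "error": 3, "blocker": 4}
    return Finding(raw["repo"], raw["rule"], sev_map[raw["level"]], "sast")

def from_network_scanner(raw: dict) -> Finding:
    # assume this scanner reports CVSS 0-10; bucket it into 1-4
    severity = min(4, max(1, int(raw["cvss"] // 2.5) + 1))
    return Finding(raw["host"], raw["cve"], severity, "network")

findings = {
    from_sast({"repo": "billing-service", "rule": "SQLI-001", "level": "error"}),
    from_network_scanner({"host": "10.0.4.17", "cve": "CVE-2019-0708", "cvss": 9.8}),
}
for finding in sorted(findings, key=lambda f: -f.severity):
    print(finding)  # highest-severity issues first, regardless of which tool found them
```

A common record makes a single prioritized queue possible in place of per-console triage, with exact duplicates collapsing automatically.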

“ZeroNorth gives us the visibility and assurance that we’re lowering risks to the organization. And it does so while reducing the staffing requirements for implementing and managing existing scanning tools and increasing their collective value,” said Amit Bhardwaj, vice president, IT security and compliance at Rodan & Fields.

“ZeroNorth is an important partner that gives us confidence in our security posture.”

Expanded focus, expanded team

As a result of this funding round, Peter Kuper and Patrick Heim from ClearSky Ventures, and Art Coviello from Rally Ventures will join Enrico Petrillo and Ernesto DiGiambattista on ZeroNorth’s board of directors. In addition, the company welcomes John Steven as its new chief technology officer (CTO) and Alan Deane as vice president of worldwide sales.

With more than two decades of software security experience and specific expertise in threat modeling, security architecture, static analysis and security testing, John Steven will lead ZeroNorth’s technical direction in defining and delivering solutions that will enable organizations to improve security through their digital transformation journey.

Prior to joining ZeroNorth, John was senior director at Synopsys, served as co-CTO at Cigital and was co-founder and CTO of Codiscope. John will team with vice president of engineering Andrei Bezdedeanu to drive innovation in the ZeroNorth platform that enables organizations to stay ahead of the ever-evolving threat landscape.

Alan Deane has more than two decades of experience leading worldwide sales organizations for cybersecurity startups and established industry players. He was most recently vice president of worldwide sales at DFLabs and spent six years as vice president of worldwide sales and field operations at Qumas. He served similar stints as vice president of the sales-risk & compliance business unit at McAfee, and vice president of sales at Foundstone.

“Proactively managing security and risk is about more than application security testing orchestration. Application vulnerability correlation and threat vulnerability management are important pieces of the puzzle that we’re delivering for customers grappling with the realities of digital transformation and managing risk in new environments,” said Ernesto DiGiambattista, ZeroNorth’s CEO and founder.

“We now have a broader focus that called for an expanded team and a new brand to match. With these pieces in place and the support of world-class investors, we’re ready to make proactive security a reality for organizations worldwide.”


from Help Net Security http://bit.ly/2DFnBAB

Is It Faster to Ask a Digital Assistant or Just Do It Yourself?


The Amazon Echo and Google Home have all sorts of abilities. But is asking them something out loud actually faster than doing it yourself?

To find out, we enlisted our friends at Gizmodo (who have an Echo) and presented a challenge with Lifehacker’s very own video producer, Abu. We timed how long it took to ask Alexa a question, and waited for her to complete the task. At the same time, Abu looked up the information on his phone (and, in some cases, performed the action IRL).

The verdict? Well, it depends. Certain answers Alexa gives are wordy, meaning a task like reading the day’s headlines is faster if you’re just glancing and scrolling. But if you have smart home devices hooked up to voice control, you could shave off a few seconds, say, turning on a light. And the biggest win for the digital assistant? Doing basically anything with dirty hands.


from Lifehacker http://bit.ly/2GUpagc

Securing edge devices – how to keep the crooks out of your network

We spend a lot of our online time out and about these days, using our mobile phones and connecting over cellular networks or public Wi-Fi…

…but most of us still have a network that we think of as ours, which we treat differently to the rest of the internet, the giant part that’s theirs.

Whether we’re at work or at home, we still have the notion of an edge to our network – that’s edge as in boundary, where we typically set up a router or a firewall to keep the inside and outside apart.

If only life were that simple!

Regular readers of Naked Security will know that when we write about network edge devices, such as routers, we often mean edge as in edginess, a word that denotes nervousness and tension.

In the past year, we’ve written about router takeovers, router vulnerabilities, router zombification, router malware, and even about what you might call a security malaise hanging over the world of internet devices.

What to do?

Today, the Cyber Threat Alliance (CTA), of which Sophos is a member, has published a fascinating and helpful report entitled – appropriately enough – Securing Edge Devices.

Produced by a collaboration of cybersecurity experts – competitors working together for the greater good, in fact – you will find it to be a great historical overview of router security blunders and how we can co-operate to prevent them happening in the future.

Whether you’re a programmer yourself, struggling to get cybersecurity right on a tiny budget amid a sea of pressing deadlines, or a user wondering what you can do to improve network security for your family in your own home…

this report is a great read.



from Naked Security http://bit.ly/2vuzk0I

Defending Democracies Against Information Attacks

To better understand influence attacks, we proposed an approach that models democracy itself as an information system and explains how democracies are vulnerable to certain forms of information attacks that autocracies naturally resist. Our model combines ideas from both international security and computer security, avoiding the limitations of both in explaining how influence attacks may damage democracy as a whole.

Our initial account is necessarily limited. Building a truly comprehensive understanding of democracy as an information system will be a Herculean labor, involving the collective endeavors of political scientists and theorists, computer scientists, scholars of complexity, and others.

In this short paper, we undertake a more modest task: providing policy advice to improve the resilience of democracy against these attacks. Specifically, we can show how policy makers not only need to think about how to strengthen systems against attacks, but also need to consider how these efforts intersect with public beliefs -- or common political knowledge -- about these systems, since public beliefs may themselves be an important vector for attacks.

In democracies, many important political decisions are taken by ordinary citizens (typically, in electoral democracies, by voting for political representatives). This means that citizens need to have some shared understandings about their political system, and that the society needs some means of generating shared information regarding who their citizens are and what they want. We call this common political knowledge, and it is largely generated through mechanisms of social aggregation (and the institutions that implement them), such as voting, censuses, and the like. These are imperfect mechanisms, but essential to the proper functioning of democracy. They are often compromised or non-existent in autocratic regimes, since they are potentially threatening to the rulers.

In modern democracies, the most important such mechanism is voting, which aggregates citizens' choices over competing parties and politicians to determine who is to control executive power for a limited period. Another important mechanism is the census process, which plays an important role in the US and in other democracies in providing broad information about the population, in shaping the electoral system (through the allocation of seats in the House of Representatives), and in policy making (through the allocation of government spending and resources). Of lesser import are public commenting processes, through which individuals and interest groups can comment on significant public policy and regulatory decisions.

All of these systems are vulnerable to attack. Elections are vulnerable to a variety of illegal manipulations, including vote rigging. However, many kinds of manipulation are currently legal in the US, including many forms of gerrymandering, gimmicking voting time, allocating polling booths and resources so as to advantage or disadvantage particular populations, imposing onerous registration and identity requirements, and so on.

Censuses may be manipulated through the provision of bogus information or, more plausibly, through the skewing of policy or resources so that some populations are undercounted. Many of the political battles over the census over the past few decades have been waged over whether the census should undertake statistical measures to counter undersampling bias for populations who are statistically less likely to return census forms, such as minorities and undocumented immigrants. Current efforts to include a question about immigration status may make it less likely that undocumented or recent immigrants will return completed forms.

Finally, public commenting systems too are vulnerable to attacks intended to misrepresent the support for or opposition to specific proposals, including the formation of astroturf (artificial grassroots) groups and the misuse of fake or stolen identities in large-scale mail, fax, email or online commenting systems.

All these attacks are relatively well understood, even if policy choices might be improved by a better understanding of their relationship to shared political knowledge. For example, some voting ID requirements are rationalized through appeals to security concerns about voter fraud. While political scientists have suggested that these concerns are largely unwarranted, we currently lack a framework for evaluating the trade-offs, if any. Computer security concepts such as confidentiality, integrity, and availability could be combined with findings from political science and political theory to provide such a framework.

Even so, the relationship between social aggregation institutions and public beliefs is far less well understood by policy makers. Even when social aggregation mechanisms and institutions are robust against direct attacks, they may be vulnerable to more indirect attacks aimed at destabilizing public beliefs about them.

Democratic societies are vulnerable to (at least) two kinds of knowledge attacks that autocratic societies are not. First are flooding attacks that create confusion among citizens about what other citizens believe, making it far more difficult for them to organize among themselves. Second are confidence attacks. These attempt to undermine public confidence in the institutions of social aggregation, so that their results are no longer broadly accepted as legitimate representations of the citizenry.

Most obviously, democracies will function poorly when citizens do not believe that voting is fair. This makes democracies vulnerable to attacks aimed at destabilizing public confidence in voting institutions. For example, some of Russia's hacking efforts against the 2016 presidential election were designed to undermine citizens' confidence in the result. Russian hacking attacks against Ukraine, which targeted the systems through which election results were reported out, were intended to create confusion among voters about what the outcome actually was. Similarly, the "Guccifer 2.0" hacking identity, which has been attributed to Russian military intelligence, sought to suggest that the US electoral system had been compromised by the Democrats in the days immediately before the presidential vote. If, as expected, Donald Trump had lost the election, these claims could have been combined with the actual evidence of hacking to create the appearance that the election was fundamentally compromised.

Similar attacks against the perception of fairness are likely to be employed against the 2020 US census. Should efforts to include a citizenship question fail, some political actors who are disadvantaged by demographic changes such as increases in foreign-born residents and population shift from rural to urban and suburban areas will mount an effort to delegitimize the census results. Again, the genuine problems with the census, which include not only the citizenship question controversy but also serious underfunding, may help to bolster these efforts.

Mechanisms that allow interested actors and ordinary members of the public to comment on proposed policies are similarly vulnerable. For example, the Federal Communications Commission (FCC) announced in 2017 that it was proposing to repeal its net neutrality ruling. Interest groups backing the FCC rollback correctly anticipated a widespread backlash from a politically active coalition of net neutrality supporters. The result was warfare through public commenting. More than 22 million comments were filed, most of which appeared to be either automatically generated or form letters. Millions of these comments were apparently fake, and attached unsuspecting people's names and email addresses to comments supporting the FCC's repeal efforts. The vast majority of comments that were not either form letters or automatically generated opposed the FCC's proposed ruling. The furor around the commenting process was magnified by claims from inside the FCC (later discredited) that the commenting process had also been subjected to a cyberattack.

We do not yet know the identity and motives of the actors behind the flood of fake comments, although the New York State Attorney-General's office has issued subpoenas for records from a variety of lobbying and advocacy organizations. However, by demonstrating that the commenting process was readily manipulated, the attack made it less likely that the apparently genuine comments of those opposing the FCC's proposed ruling would be treated as useful evidence of what the public believed. The furor over purported cyberattacks, and the FCC's unwillingness itself to investigate the attack, have further undermined confidence in an online commenting system that was intended to make the FCC more open to the US public.

We do not know nearly enough about how democracies function as information systems. Generating a better understanding is itself a major policy challenge, which will require substantial resources and, even more importantly, common understandings and shared efforts across a variety of fields of knowledge that currently don't really engage with each other.

However, even this basic sketch of democracy's informational aspects can provide policy makers with some key lessons. The most important is that it may be as important to bolster shared public beliefs about key institutions such as voting, public commenting, and census taking against attack, as to bolster the mechanisms and related institutions themselves.

Specifically, many efforts to mitigate attacks against democratic systems begin with spreading public awareness and alarm about their vulnerabilities. This has the benefit of increasing awareness about real problems, but it may -- especially if exaggerated for effect -- damage public confidence in the very social aggregation institutions it means to protect. This may mean, for example, that public awareness efforts about Russian hacking that are based on flawed analytic techniques may themselves damage democracy by exaggerating the consequences of attacks.

More generally, this poses important challenges for policy efforts to secure social aggregation institutions against attacks. How can one best secure the systems themselves without damaging public confidence in them? At a minimum, successful policy measures will not simply identify problems in existing systems, but provide practicable, publicly visible, and readily understandable solutions to mitigate them.

We have focused on the problem of confidence attacks in this short essay, because they are both more poorly understood and more profound than flooding attacks. Given historical experience, democracy can probably survive some amount of disinformation about citizens' beliefs better than it can survive attacks aimed at its core institutions of aggregation. Policy makers need a better understanding of the relationship between political institutions and social beliefs: specifically, the importance of the social aggregation institutions that allow democracies to understand themselves.

There are some low-hanging fruit. Very often, hardening these institutions against attacks on their confidence will go hand in hand with hardening them against attacks more generally. Thus, for example, reforms to voting that require permanent paper ballots and random auditing would not only better secure voting against manipulation, but would have moderately beneficial consequences for public beliefs too.

There are likely broadly similar solutions for public commenting systems. Here, the informational trade-offs are less profound than for voting, since there is no need to balance the requirement for anonymity (so that no-one can tell who voted for whom ex post) against other requirements (to ensure that no-one votes twice or more, no votes are changed and so on). Instead, the balance to be struck is between general ease of access and security, making it easier, for example, to leverage secondary sources to validate identity.

Both the robustness of and public confidence in the US census and the other statistical systems that guide the allocation of resources could be improved by insulating them better from political control. For example, the director of the census could be appointed through a system similar to that used for the US Comptroller-General, requiring bipartisan agreement for appointment and making it hard to exert post-appointment pressure on the official.

Our arguments also illustrate how some well-intentioned efforts to combat social influence operations may have perverse consequences for general social beliefs. The perception of security is at least as important as the reality of security, and any defenses against information attacks need to address both.

However, we need far better developed intellectual tools if we are to properly understand the trade-offs, instead of proposing clearly beneficial policies, and avoiding straightforward mistakes. Forging such tools will require computer security specialists to start thinking systematically about public beliefs as an integral part of the systems that they seek to defend. It will mean that more military oriented cybersecurity specialists need to think deeply about the functioning of democracy and the capacity of internal as well as external actors to disrupt it, rather than reaching for their standard toolkit of state-level deterrence tools. Finally, specialists in the workings of democracy have to learn how to think about democracy and its trade-offs in specifically informational terms.

This essay was written with Henry Farrell, and has previously appeared on Defusing Disinfo.


from Schneier on Security http://bit.ly/2GV05ls

Which cyber threats should financial institutions be on the lookout for?

Banks and financial services organizations were the targets of 25.7 percent of all malware attacks last year, more than any other industry, IntSights revealed in their latest report.

These include:

  • Trojans (banking, info-stealing, downloaders)
  • ATM malware (since the start of 2018, more than 20 ATM malware families have hit banks around the globe)
  • Ransomware (Mexican financial institutions were particularly targeted)
  • Mobile banking malware – both fake banking apps and banking Trojans. (According to the company, fake mobile banking apps that mimic major blue-chip banking apps have proven to be remarkably successful endeavors for hackers.)

Types of attacks employed

Aside from being targeted with malware, financial institutions were also hit with DDoS attacks.

Phishing – made easier by phishing kits offered for sale on dark web markets – continues to be one of the most common methods cybercriminals use to target financial organizations and their customers.

A relatively new and rarely used attack vector was flagged in February 2019, when UK-based Metro Bank became the first publicly reported victim of SMS verification code interception. Cybercriminals exploited flaws in the SS7 telecommunication protocol to intercept messages that authorize payments from accounts and emptied a small number of customers’ bank accounts.

“This was not the first instance of an SS7 exploitation. However, Metro Bank was the first bank to be publicly identified as a victim of this kind of attack,” Rosenberg pointed out.

Finally, according to IntSights research, there has been a marked targeting of banks and financial institutions in developing regions of the world, mainly Latin America, Africa, and South Asia (primarily India and Pakistan). SWIFT ISAC also reported that cyberattacks involving the SWIFT system are mostly directed at institutions in those parts of the world.

It’s not difficult to see why: financial organizations in those countries lack the comprehensive security systems that are common in more developed areas.

A spike in data leaks

The leak of Collection #1 and Collection #2-5 resulted in a big spike in leaked credentials during Q1 2019. The amount of leaked credit card data has also skyrocketed in the same period.

“Cybercriminals use these compromised credit card numbers to primarily make small purchases, as this practice does not often attract unwanted attention. However, these small purchases can generate nearly ten times more “free money” than what the card is worth on the black market,” Rosenberg explained.

“Since credit card companies will typically reimburse customers who have been victimized by fraudulent credit card usage, cybercriminals find stealing card numbers to be a relatively safe and simple way to generate profits. The risks are small and the potential gains are significant.”


from Help Net Security http://bit.ly/2WgG9OW

Monday, April 29, 2019

Making the most of threat intelligence with threat intelligence gateways

Even though many security professionals are still dissatisfied with threat intelligence accuracy and quality, its use as a resource for network defense is growing. According to the 2019 SANS Cyber Threat Intelligence (CTI) Survey, the percentage of organizations that either produce or consume CTI has risen from 60 to 72 percent.

As threat intelligence gets more broadly adopted and as more organizations seek to operationalize it more effectively and efficiently, they are slowly starting to implement threat intelligence gateways (TIGs).

What are threat intelligence gateways?

There was a time when threat intelligence was synonymous with indicators of compromise (IoCs), but it is now generally considered to also include tactics, techniques and procedures (TTPs), threat behaviors, attack surface awareness and strategic assessments. This security data and information is then used to create a picture of an organization’s digital risk and to manage it.

Threat intelligence gateways are an emerging cybersecurity category. Fundamentally, the solution sits on the network, in line, typically in front of a firewall, and filters inbound and outbound traffic based on a wide array of TI from multiple sources/feeds (commercial, open source, industry, and government).

TIGs can also make allow or deny decisions based on the source of the traffic.
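
The core decision a TIG makes can be illustrated with a short sketch: merge IP indicators from several feeds into one block set, then allow or deny each connection depending on whether its source or destination matches. Real gateways handle tens of millions of indicators, cover domains as well as IPs, and update feeds automatically; the feed contents and helper names below are invented for illustration.

```python
# Illustrative sketch of threat-intelligence-based traffic filtering.
# Feed contents and function names are invented; real TIGs operate in line
# at wire speed against far larger indicator sets.
import ipaddress

def aggregate_feeds(feeds: list[list[str]]) -> set[ipaddress.IPv4Address]:
    """Merge indicator lists from multiple sources, dropping malformed entries."""
    blocked = set()
    for feed in feeds:
        for entry in feed:
            try:
                blocked.add(ipaddress.IPv4Address(entry))
            except ValueError:
                continue  # skip anything that isn't a clean IPv4 indicator
    return blocked

def decide(src: str, dst: str, blocked: set[ipaddress.IPv4Address]) -> str:
    """Deny the connection if either endpoint matches a known-bad indicator."""
    if ipaddress.IPv4Address(src) in blocked or ipaddress.IPv4Address(dst) in blocked:
        return "deny"
    return "allow"

commercial_feed = ["203.0.113.7", "198.51.100.23"]
open_source_feed = ["203.0.113.7", "192.0.2.99", "not-an-ip"]

blocked = aggregate_feeds([commercial_feed, open_source_feed])
print(decide("10.0.0.5", "203.0.113.7", blocked))   # deny: outbound to a listed IP
print(decide("10.0.0.5", "93.184.216.34", blocked)) # allow
```

At the indicator volumes Weller describes, a simple in-memory set would give way to purpose-built data structures and hardware, but the allow/deny logic is conceptually the same.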

“Gartner defines TIGs as ‘a network security solution that filters traffic based on large volumes of threat intelligence (TI) indicators’,” Todd Weller, Chief Strategy Officer at Bandura Cyber, explained to Help Net Security. “We define TIG a bit more broadly, because our TIG goes beyond just filtering: we provide access to TI, aggregation, automation, and the critical ‘taking action’ element.”

TIGs are not an alternative to traditional threat intelligence services – they complement them, he noted. They provide security teams with the ability to detect and block traffic based on threat intelligence at a scale that their next-generation firewalls (NGFWs) don’t allow.

“NGFWs work well with threat intelligence from the NGFW vendor but often don’t play nice with third-party TI indicators (IPs and domains). Also, for performance reasons, many NGFWs significantly limit the volume of third-party indicators you can ingest and take action with. For most NGFWs, the volume is limited to a few hundred thousand indicators, whereas the number of indicators on many threat feeds can be in the millions and tens of millions,” Weller explained.

“Additionally, managing third party TI in NGFWs is cumbersome and time consuming. Organizations that aren’t using a Threat Intelligence Platform (TIP) from companies like Anomali, Threat Connect, ThreatQuotient, and others, also find value in our ability to aggregate multiple threat feeds in one place and have them automatically updated. This reduces a lot of manual effort.”

What’s in it for the organizations

Weller says that they have seen a significant increase in customer interest in TIGs over the last twelve months, and expect this trend to continue.

While large enterprises – as threat intelligence power users – welcome the ability to filter traffic against over 100 million unique IPs and domains with virtually no latency and to easily integrate with TI sources and their existing security systems like SIEMs, small and mid-sized organizations are looking at TIGs as another layer of defense.

“These companies don’t have significant resources or operate with a big security operations center or armies of analysts but they have the same cybersecurity problems. TIGs enable these customers to gain access to enterprise-grade TI capabilities in an easy, automated, and affordable way,” he notes.

“For them it’s really about ease of use and manageability and the plug-n-play nature of the TIG. They love the fact that they can quickly deploy and gain value from a turnkey solution that is automated and has low management overhead.”

He also noted an increased interest from managed security services providers (MSSPs) that are looking to offer value-added threat intelligence services to their customer base, and expects threat intelligence vendors to start offering TIGs in the near future.

“Many TI vendors have historically focused on large enterprises that had the resources to buy and consume third-party threat intelligence. I can tell you first-hand many of these TI vendors are looking at ways to broaden their market and revenue opportunity. In the near term, I’d look for more strategic partnerships along these lines. Longer term, I see the potential for consolidation,” he opined.


from Help Net Security http://bit.ly/2UQ0Yzo

Hacking our way into cybersecurity for medical devices


Hospitals are filled with machines connected to the internet. With a combination of both wired and wireless connectivity, knowing and managing which devices are connected has become more complicated and, consequently, the institutions’ attack surface has expanded.

When did these devices get smart?

A brief timeline shows the FDA didn’t start regulating the connectivity of devices until 2005, but medical devices started to leverage software back in the ‘80s. Clinical capabilities have benefited greatly from this digitalization, bringing features, data collection and analytic computing to clinical care. Some devices that have been digitized include pacemakers, infusion pumps, ventilators, CT and MRI scanners, all of which (as a result) contain patient information and have some level of connectivity. Walk into a healthcare conference today and you’ll be hard-pressed to find devices that don’t offer wired, Bluetooth, or wireless connectivity.

Wearable devices and at-home medical devices are also becoming increasingly common. The ability for a device to transmit vital sign data from a patient’s home to hospital staff has encouraged the expansion of the telehealth industry. It has also made it possible for the emergency medical community to respond to device alerts. In some cases, health insurance companies use data from fitness trackers to incentivize medical expense management and “wellness” promotions.

Information exposure and theft

There was a time when medical devices relied on physical security to limit who could update a device. But to enhance clinical experience, many of these devices have since been retrofitted so they can be networked and managed remotely by both provider and vendor.

These devices often carry patient personal information (such as Social Security numbers), health insurance information, contact information and information about health conditions. Connectivity and inevitable software vulnerabilities mean that this data can potentially be exposed.

There are some predictable schemes for obtaining a person’s SSN – insurance claims, tax filings, rebate claims, bank loan documents. More healthcare-specific is the idea of a deceased patient’s SSN being used to run a scheme, as there tends to be less monitoring of financial activity after someone has died. There are also those who use insurance and contact information to claim prescriptions or run phishing schemes on an aging population. The combination of patient health factors and geographic location can sometimes also allow scammers to pinpoint a person’s identity and discover other personal information that can be of use.

Understanding devices in the field

When a connected medical device is procured by a healthcare delivery organization (HDO), the terms for ongoing support are a critical component of the negotiation. This often includes medical device manufacturers (MDMs) supporting device bug resolution, patching for known vulnerabilities, and enhancements to security over the warrantied lifetime of the device.

However, there is no mandate to remove a device that’s past vendor warranty from operation. With payers influencing HDO procurement strategies, devices that “still work” can be difficult to throw out, especially when a cybersecurity vulnerability is “theoretical.”

Imagine a vulnerability is identified on a single device that is no longer under warranty. This means the vendor no longer provides software patches. The same vulnerability is likely present, and exploitable, on other installations of that device. Devices that are no longer receiving updates for known vulnerabilities present a growing opportunity for attackers looking for an entry point into critical healthcare data.

An additional consideration is the development practices for medical devices. Many MDMs develop their software on commercial operating systems such as Windows. Software is phased out all the time – it’s part of the development life cycle. But, for example, the end of Windows 7 support in 2020 means medical devices in the field that run Windows 7 will become more vulnerable with each passing day. Every virus or piece of malware that targets them will no longer be met by Microsoft’s ongoing security updates and improvements. These devices and their HDOs will have to fend for themselves.

Exploitation

Setting aside the data available on a device, there is also the possibility of attackers using devices as a gateway into an HDO’s network. Due to budgeting decisions and organizations’ preference for clinical investments, hospital IT departments often work with limited resources. In some instances, the limited allocation of resources towards recovery procedures has made HDOs especially susceptible to ransomware attacks.

Some have suggested that a hospital should revert to emergency protocols (i.e. pencil and paper mode) to operate during a cyber attack, as occurred when parts of the NHS were shut down due to WannaCry. This can limit the impact of attacks on elective procedures, but what about patients with urgent needs?

Research shows a 13.3% higher mortality rate for cardiac arrest patients who experienced a four-minute delay in care. And a delay in care due to a network takeover by hackers is likely to be more than four minutes.

Even in the wake of multiple HDOs implementing better security practices after an attack, there is evidence of negative outcomes for patients in facilities with a historic breach. The observed 0.04% increase in mortality rate is roughly equal in magnitude to the 0.04% improvement in patient outcomes gained from enhanced treatments, effectively cancelling out those gains.

What happens next?

The FDA draft premarket cybersecurity guidance from October 2018 recommends incorporating the NIST Cybersecurity Framework (NIST-CSF), which combines technical and procedural interventions across both the design and the support of devices. While there is no risk rating associated with the NIST-CSF sub-categories, the technical sub-categories tend to require more effort and technical sophistication to implement.

However, there is no need for healthcare to go it alone – we can learn from other industries. We have seen the financial services industry, often perceived as a cybersecurity leader, manage cyber threats by leveraging tools to implement and maintain security over time. The migration away from building personalized data centers to using commercially available cloud-based service providers is a prime example of this. There have been numerous case studies showing how cloud hosting enhances security responses (especially redundancy and availability), expedites product development and reduces maintenance cost over the lifetime of a product.

As medical device manufacturers develop new products and update products currently in the field, using relevant tools to address the FDA premarket guidance and incorporating industry leading best practices is surely the most sustainable and scalable approach.


from Help Net Security http://bit.ly/2J5TLci

SEC demands better disclosure for cybersecurity incidents and threats


As companies increasingly rely on networked systems and on the Internet, cybersecurity threats have grown. Companies that fall victim to a successful cyberattack incur substantial costs for remediation, including increased costs for cyber protection, lost revenues, legal costs and more. All of these costs can impact the riskiness and value of a public company’s stock.

Given the frequency, magnitude and cost of cybersecurity incidents, the Securities and Exchange Commission (SEC) has stated that it is “crucial for public companies to inform investors about relevant cybersecurity risks and incidents in a timely fashion.”

In February of 2018, the SEC issued a Commission Statement and Guidance that spelled out principles that public companies should follow in making disclosures about cybersecurity dangers and attacks. This guidance expands on a previous SEC staff guidance released in 2011 and addresses two new topics:

1. Cybersecurity disclosure policies.
2. The application of insider trading prohibitions in a cybersecurity context.

The following are the five key issues the SEC outlines in the guidance. Note that this discussion is for information only. For personalized compliance recommendations, please consult a lawyer.

Materiality

One of the highlights of the 2018 guidance is the issue of materiality. In the past, when companies filed disclosures required by the Securities Act of 1933 and the Securities Exchange Act of 1934, they may have disclosed cybersecurity risks and incidents on a periodic basis or when issues became “material”—significant enough to disclose—delaying disclosure when an incident was still under investigation.

The 2018 guidance lowers the threshold for disclosure. Companies should now disclose “known trends and uncertainties,” says Brian V. Breheny, a partner who heads the SEC Reporting and Compliance Practice at Skadden, Arps, Slate, Meagher & Flom LLP. “If something is reasonably likely to result in a material impact on the company, you should give investors an early warning.”

In determining what is material, the guidance suggests that companies consider the nature, extent and potential magnitude of the event and the harm such incidents could cause. Companies should disclose enough information so that statements are not misleading and correct prior disclosures that later prove to be untrue. On the other hand, the SEC does not intend companies to make disclosures detailed enough to compromise their cybersecurity efforts.

Types of security risks that must be disclosed

Item 503(c) of Regulation S-K (of the US Securities Act of 1933) and Item 3.D of Form 20-F (which must be submitted by “foreign private issuers”) require companies to disclose the most significant factors that make investments in their securities speculative or risky. The new guidance recommends that companies include cybersecurity risks and incidents in these disclosures. The SEC advises companies to avoid generic disclosures and tailor them to their particular cybersecurity risks and incidents.

When David J. Lavan, Partner at Dinsmore & Shohl LLP and former special counsel in the Division of Corporate Finance at the SEC, works with clients, some key risk factors he considers include:

  • What industry is the company in? Some industries are subject to more cybersecurity threats than others. Finance, healthcare, retail and utilities are far more likely to be attacked than construction, for example.
  • Has the company had any cyber-related incidents? What type of incidents have they had?
  • About whom do they have data? Customers? Employees? Agents? Deposit holders? Policy holders?
  • What information does the company store or transmit? Personally identifiable information? Healthcare info? Proprietary info? Info in the public domain?
  • What regulations is the company required to comply with? NYDFS? GDPR? California’s Consumer Privacy Act?
  • Is there anything in the contract with the company hosting the client’s data or providing cloud services that might impact other companies storing information in that facility?
  • Does the company have business recovery procedures in place?
  • Does the company have insurance? How does this affect the company’s ability to recover from a cybersecurity incident? Disclosing this in the 10K helps investors understand who is responsible for cyber-related operational risk.
  • Does the board understand its disclosure responsibility?
  • Does the company understand how to perform cyber-related risk reporting? Can they report fast enough for the risks to be considered properly by the company’s disclosure committee?
  • Are the security risks changing? Has there been an uptick in clients getting pinged even if no one’s getting through?

Disclosure policies and procedures

The Guidance encourages companies to adopt comprehensive cybersecurity policies and procedures and regularly assess their sufficiency and compliance. The assessment should include the effectiveness of the company’s disclosure controls and procedures related to cybersecurity risk.

Explains N. Peter Rasmussen, Senior Legal Analyst at Bloomberg Law, “Cybersecurity incident teams should be well coordinated with disclosure compliance and other non-IT professionals within the company. Disclosure controls and procedures should ensure that relevant information about cybersecurity risk is collected and documented in a timely fashion and that it is reported to the appropriate personnel to assess its materiality.”

“Companies are under cyberattack all the time. Whether these ongoing risks become material and whether they need to be disclosed are different questions,” explains Breheny. “The issue is whether individuals involved in cybersecurity are elevating issues that come up quickly enough and to the right people to determine whether something needs to be disclosed.”

The company’s CEO and CFO must certify the controls. If the company’s controls and procedures fail to ensure that information about a cyber incident is properly raised for timely disclosure, and the company made the certifications anyway, the CEO and CFO could be at risk for enforcement action.

Role of officers and the board

Item 407(h) of Regulation S-K and Item 7 of Schedule 14A require companies to disclose the board of directors’ role in overseeing company risks, including how the board administers its oversight function and the effect this has on the board’s leadership structure. With the 2018 guidance, the SEC emphasizes the board’s role in monitoring and overseeing cybersecurity risk. The guidance implies that cybersecurity is clearly a board-level concern – not just a matter for the tech department.

La Fleur C. Browne, Associate General Counsel and Assistant Secretary, Church & Dwight says her firm’s board has a disclosure committee that regularly evaluates what needs to be included in disclosure statements. “Different people might view materiality differently. When the guidance first came down, our disclosure committee met with our IT department to review the guidance, discuss the types of threats they see, and explain that they should let the committee know what’s going on. IT has committed to report any cybersecurity incidents to the disclosure committee, which in turn determines whether the issue is material and should be disclosed.”

The head of IT now attends Church & Dwight Disclosure Committee meetings to provide updates on cybersecurity so the committee can have informed discussions. The disclosure committee keeps the CEO and CFO informed about what IT is seeing and whether it’s material to the company. “Our board also has a cybersecurity item on the agenda of every board meeting and has a deep-dive discussion about cybersecurity at least once a year.”

Insider trading

Finally, the 2018 Guidance requires companies and their directors to comply with laws regarding insider trading in connection with information about cybersecurity risks and incidents. Companies should have well-designed policies and procedures to prevent insider trading based on cybersecurity risks and incidents.

Overall, in light of the 2018 Guidance, says Rasmussen, “It’s fair to say that we can expect the SEC will take a closer look at cybersecurity disclosures by public companies. Issuers must anticipate the questions the SEC will have. And the SEC has indicated that it will emphasize risk factor disclosures, the timely disclosure of cyber incidents, insider trading controls and the effectiveness of the company’s data security policies and internal accounting controls. We can expect to see greater enforcement activity based on inadequacies in these areas of disclosure.”


from Help Net Security http://bit.ly/2vvSNha

Most adults are concerned about malware and phishing on social media

More than eighty percent of adults believe that they’re at risk when it comes to security on social media.


Most American adults are using at least one social media platform daily, and three-quarters are interested in protecting themselves and their privacy, according to the 2019 Study on Social Media Privacy and Security Concerns released by ID Experts.

63% of adults visit Facebook every day, 42% visit YouTube and 29% use Instagram on a daily basis. Users are most concerned about Facebook – 68% are concerned about their privacy and security on Facebook, compared to 40% on Instagram and 39% on YouTube.

Tom Kelly, president and CEO of ID Experts, commented, “What we’ve learned is that the majority of Americans today are concerned about social media privacy. Their level of concern does not change based on age, gender, ethnicity, socioeconomic status or political leaning. It’s amazing how so many ubiquitous forums can be the hubs of thriving social lives but also the source of so much unease.”

Risk on social media takes many forms, but respondents ranked malware and phishing as their greatest concerns, with 82% of adults expressing concern for these threats. Nearly eight in ten adults (79%) are concerned about account takeover, which occurs when someone gets access to your account to post content and lock you out.

78% are concerned about account impersonation, as well as bots used to steal data or send spam. Three in four adults are interested in assistance with detecting and taking down fraudulent links and fake social media accounts.


The vast majority of adults believe that children, teens and seniors are at risk on social media, and over half rank those groups as highly at risk. 92% of seniors (65+) consider themselves at risk when it comes to security on social media, but for younger adults, the risks don’t feel as personal, though 89% of Generation Z (ages 18-21) said they believe teenagers are at risk. Generation Z was also found to be the most likely to quit a social media platform due to concerns over privacy.

The research was conducted by Morning Consult and included interviews of over 2,200 adults across the country who represent a diverse sample of gender, age, race, political and religious beliefs, educational backgrounds and income levels.


from Help Net Security http://bit.ly/2LfcMLS

Companies face regulatory fines and cybersecurity threats, still fail to protect sensitive data

22% of a company’s folders are accessible, on average, to every employee, according to the new report from the Varonis Data Lab, which analyzed more than 54 billion files.


The report shines a light on security issues that put organizations at risk from data breaches, insider threats and crippling malware attacks.

Key findings from the 2019 Global Data Risk Report include:

Out-of-control permissions expose sensitive files and folders to every employee:

  • 53% of companies had at least 1,000 sensitive files open to all employees.
  • 22% of all folders were accessible, on average, to every employee.

User passwords that never expire give hackers ample time to brute-force logins:

  • 38% of users had passwords that never expire, up from 10% last year.
  • 61% of companies have over 500 users with passwords that will never expire.

Stale sensitive files raise the risk of fines under HIPAA, GDPR and the upcoming CCPA:

  • 87% of companies have over 1,000 stale sensitive files.
  • 71% of companies have over 5,000 stale sensitive files.

“Ghost” users give former employees and contractors unnecessary access to information:

  • 50% of user accounts were stale.
  • 40% of companies had over 1,000 enabled, but stale, users.

Industries and regions vary when it comes to protecting their most sensitive information:

  • Retail organizations had the lowest number of exposed, sensitive files and seemed to do the best job of protecting their data overall. Financial services firms found the most exposed, sensitive files overall. Healthcare, pharmaceutical and biotech firms found the most exposed, sensitive files in each terabyte that they analyzed (4,691).
  • APAC organizations found that less than 1% of their files were sensitive, but 26% of them were exposed. EMEA organizations found sensitive data in 3% of their files, but only 15% of them were exposed. In EMEA, each terabyte averaged 4,724 exposed, sensitive files.

“One year after the GDPR and nearly six months before the CCPA, companies continue to fall even farther behind and need to secure their data,” said Varonis Field CTO Brian Vecci.

“Today, most CISOs assume that it’s just a matter of time before their security perimeter will be breached, which underscores the importance of data protection. The level of sensitive data exposure and oversubscribed access that most organizations are living with should set off alarm bells for corporate boards and shareholders.”


from Help Net Security http://bit.ly/2VxXkP8

CyberX’s first Cortex app to enable zero-trust strategies for OT networks

CyberX, the IIoT and industrial control system (ICS) security company, announced the availability of its “IIoT/ICS Asset Visibility & Threat Monitoring App” on Cortex – the industry’s only open and integrated AI-based continuous security platform.

Building on Cortex allows Palo Alto Networks Cortex partners to use normalized and stitched together data from customers’ entire enterprises to build cloud-based apps that constantly deliver innovative cybersecurity capabilities to joint customers.

As digitalization drives the deployment of billions of new Industrial Internet of Things (IIoT) devices along with pervasive connectivity between IT and OT networks, the attack surface is constantly expanding. Boards and management teams are increasingly concerned about the risk of costly production downtime and cyber-physical safety incidents from OT cyberattacks.

The new certified CyberX app is the first of its kind for securing OT networks. The integration of CyberX’s agentless platform with Cortex enables industrial and critical infrastructure organizations to implement zero-trust strategies for OT networks to stop the rapid spread of attacks.

Clients can now auto-discover and tag all managed and unmanaged IIoT/ICS devices to automatically define granular segmentation policies based on OT-specific device types, protocols, and behavior patterns.

Clients can also leverage CyberX’s continuous OT threat monitoring and IIoT/ICS threat intelligence feed — correlated with IT security events from Cortex Data Lake — to bring additional context, speed, and precision to threat investigation and threat hunting.

“The ROI benefit of CyberX’s app on Cortex is that it enables joint customers to collect and analyze network traffic data from Palo Alto Networks sensors they’ve already purchased and deployed while deploying CyberX as a cloud-based service. Customers can also choose to deploy CyberX as an on-premises solution, via physical or virtual appliances, integrated with Palo Alto Networks offerings,” said Amit Porat, Chief Architect at CyberX.

“We’re thrilled to be working with Palo Alto Networks to unify disparate data sources and apply machine learning to automatically detect and quickly respond to threats.”

“Cortex partners can leverage the vast amount of rich data available from across the enterprise to create AI-based innovations that provide more automated and accurate security outcomes to our joint customers,” said Karan Gupta, SVP of Engineering for Cortex at Palo Alto Networks.

“We’re proud to welcome CyberX to our expanding ecosystem of developers building innovative apps.”

Cortex is designed to radically simplify and significantly improve security outcomes. Deployed on a global, scalable public cloud platform, Cortex allows security teams to speed the analysis of massive data sets.

Cortex is enabled by Cortex Data Lake, where customers can securely and privately store and analyze large amounts of data normalized for advanced artificial intelligence and machine learning to find threats and orchestrate responses quickly.


from Help Net Security http://bit.ly/2XTCbfF

Cyleron launches a new user authentication and behavioral biometrics ML software platform

Cyleron, an artificial intelligence enabled software and services company announced the launch of its flagship cybersecurity software solution, CyleronCentry. CyleronCentry is a continuous user authentication & behavioral biometrics machine learning software platform.

Companies are under continuous digital assault by cybercriminals seeking to exploit any and all possible weaknesses whether in hardware, software, or end-user behavior. Unfortunately, the vast majority of data breaches are due to weak or stolen authentication credentials. The concept of the password for system authentication is inherently flawed and outdated.

Passwords can often be guessed, revealed through brute-force techniques, stolen, shared, phished, etc. CyleronCentry is the next generation of end-user authentication and is continuous, highly specific to the individual, and accurate.

CyleronCentry uses advanced deep neural network techniques to process keyboard data and create a detailed behavioral-biometric profile to match the individual user. In the event an anomaly is detected, the administrator or system security professional is notified immediately via any method of their choosing for analysis.
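Cyleron describes deep neural network processing of keyboard data; as a much simpler stand-in for the behavioral-biometrics idea, the sketch below builds a per-user typing-rhythm profile from inter-key timings and flags sessions that deviate strongly from it. The function names and threshold are illustrative assumptions, not part of CyleronCentry.

```python
from statistics import mean, stdev

def keystroke_intervals(timestamps: list[float]) -> list[float]:
    """Convert key-press timestamps (in seconds) into inter-key intervals."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def build_profile(sessions: list[list[float]]) -> tuple[float, float]:
    """Summarize a user's typing rhythm as mean and stdev of inter-key intervals."""
    intervals = [iv for session in sessions for iv in keystroke_intervals(session)]
    return mean(intervals), stdev(intervals)

def is_anomalous(sample: list[float], profile: tuple[float, float], z: float = 3.0) -> bool:
    """Flag a session whose average rhythm deviates strongly from the enrolled profile."""
    mu, sigma = profile
    return abs(mean(keystroke_intervals(sample)) - mu) > z * sigma

# Enroll on two sessions, then test a new one with a very different rhythm.
profile = build_profile([[0.0, 0.18, 0.35, 0.51], [0.0, 0.17, 0.36, 0.55]])
print(is_anomalous([0.0, 0.9, 2.1, 3.4], profile))  # True: rhythm doesn't match
```

A real system would use many more features (dwell time, digraph latencies, mouse dynamics) and a trained model rather than a single z-score threshold.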

“CyleronCentry’s artificial intelligence enhanced detection capabilities make it a valuable and formidable addition to a company’s cybersecurity arsenal. By continuously authenticating users with their unique behavioral footprint, CyleronCentry forever changes the traditional password authentication landscape,” said Cyleron Inc. CEO, Susan Rebner.


from Help Net Security http://bit.ly/2GIK3JX

ImmuniWeb unveils free website security test

ImmuniWeb, a global provider of web, mobile and API security testing and risk ratings, expands its free community offering with a website security test.

Although initially designed for SMEs and organizations with nascent application security testing programs, the service can also benefit large organizations with mature DevSecOps programs, which can use it to quickly run hundreds of daily scans to ensure essential security and compliance of their external web applications.

Once launched, the test will:

  • Verify PCI DSS requirements 6.2, 6.5 and 6.6.
  • Fingerprint versions of over 100 of the most popular CMSs and web frameworks, and over 165,000 of their plugins.
  • Run a comprehensive vulnerability check for all known vulnerabilities in the fingerprinted software.
  • Check over 20 HTTP headers related to security, encryption or privacy for strong configurations in line with industry best practices, including ones from OWASP.
  • Assess Content Security Policy (CSP) to prevent some XSS and CSRF exploitation vectors, as well as variations of ransomware and cryptojacking attacks (a minimal header-check sketch follows this list).
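Since ImmuniWeb's internals aren't public, the following is only a toy illustration of the header-check item above: it fetches a page and reports which of a handful of well-known security headers are present. The real service inspects many more headers and validates their values, not just their presence.

```python
import urllib.request

# A small subset of the security-related response headers a scanner might look for.
EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def check_security_headers(url: str) -> dict[str, bool]:
    """Fetch the URL and report which expected headers are present in the response."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        present = {name.lower() for name in resp.headers.keys()}
    return {h: h.lower() in present for h in EXPECTED_HEADERS}

if __name__ == "__main__":
    for header, found in check_security_headers("https://example.com").items():
        print(f"{'OK     ' if found else 'MISSING'} {header}")
```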

Among almost 40 million public websites tested, only 9.74% contain up-to-date software, 2.07% satisfy the aforementioned PCI DSS requirements, and only 2.39% are protected with a WAF.

Ilia Kolochenko, CEO and Founder of ImmuniWeb, says: “Our free community offering enables our company to maintain sustainable relations with the community, get valuable feedback and actionable data on the global state of application security. We are excited to see a steadily growing number of users, many of whom later become commercial customers for our ImmuniWeb AI offering.”

The website security test is now also integrated with the freemium ImmuniWeb Discovery offering, based on OSINT technology, for non-intrusive discovery of an organization’s external attack surface.

ImmuniWeb Discovery builds an inventory of an organization’s external web, mobile and cloud assets, providing full asset visibility to organizations of all sizes.


from Help Net Security http://bit.ly/2ZKMaFZ

Immuta releases new automated data governance platform with compliant collaboration features

Immuta announced an industry first: No-code, automated governance features that enable business analysts and data scientists to securely share and collaborate with data, dashboards, and scripts without fear of violating data policy and industry regulations.

The Immuta platform ensures compliance with all major data regulations – including the General Data Protection Regulation (GDPR), The California Consumer Privacy Act (CCPA), and the Health Insurance Portability and Accountability Act (HIPAA).

According to Gartner’s Risk Management Leadership Council’s April 2019 Emerging Risks Monitor Report, “Accelerating Privacy Regulation” is the number one concern among risk, audit, and compliance executives. As defined by Gartner, “Accelerating Privacy Regulation” is “the risk of progressively complicated statutory regimes, which cover the use and protection of customer data, creating the potential for legal and financial exposure,” e.g. GDPR, CCPA, and HIPAA.

The Immuta Automated Data Governance Platform creates trust across security, legal, compliance and business teams so they can work together to ensure timely access to critical business data with minimal risk. Its automated, scalable, no-code approach makes it easy for users across an organization to access the data they need on demand, while protecting privacy and enforcing regulatory policies on all data.

Automated Policy Inheritance

Historically, data policies have been managed at the application or system level. When analysts attempt to integrate data across many disparate systems, it can be difficult, if not impossible, to re-write these rules into a new, combined policy. This process is slow, complex, error-prone, and requires months of calls, emails, and meetings.

Immuta’s new Automated Policy Inheritance feature eliminates the need for human intervention to manage policies across mashed up data sources. Analysts can instantaneously and securely create integrated data sources, share derived data from them and collaborate across the organization – with confidence that the proper controls are in place.

Re-identification requests: On-demand de-masking of sensitive data

Gaining authorization for viewing regulated data – especially when it’s a single piece of data such as a name, email or phone number – can be confusing and cumbersome. Most organizations still use slow, poorly documented processes for sensitive data requests. This creates significant barriers for time sensitive, ‘need to know’ data access requests.

For example, imagine a situation where a researcher uncovers a medical misdiagnosis in a set of healthcare data. Knowing how to request and re-identify that person and their contact information, in a timely manner, without breaking the privacy of others, could save that person’s life while maintaining the highest of ethical standards.

To remove the complexity and confusion involved with consent, Immuta has introduced Format Preserving Encryption and Reversible Masking features so that users can immediately place a digital request for the data that they need to be “de-masked,” eliminating chaos and confusion.

Digital requests are instantaneously presented for authorization so that decisions on sensitive data access can be done quickly and securely – allowing an immediate, one-time re-identification for the requested data, or elevating the request for additional scrutiny.
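As a rough illustration of the request-then-reveal workflow (not Immuta's actual Format Preserving Encryption, which keeps ciphertext in the original data format), the sketch below uses symmetric encryption from the third-party cryptography package to mask a value reversibly and unmask it only when a request is approved.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is held by the governance layer; analysts only ever see ciphertext.
key = Fernet.generate_key()
masker = Fernet(key)

def mask(value: str) -> bytes:
    """Reversibly mask a sensitive value before exposing the data set."""
    return masker.encrypt(value.encode("utf-8"))

def unmask(token: bytes) -> str:
    """Reveal the original value -- only after an authorized re-identification request."""
    return masker.decrypt(token).decode("utf-8")

masked_email = mask("patient@example.com")
print(masked_email)          # opaque token visible to analysts
print(unmask(masked_email))  # revealed once the digital request is approved
```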

Fingerprints: Capture the impact of data policy changes on downstream data users

When a data owner needs to make access and control policy changes to a data source, there has never been a way to demonstrate the impact those changes could have on the downstream use of that data. For example, statistical changes to data used within analytics or dashboards could have a major impact on the accuracy of a model or the integrity of business intelligence dashboards.

The new Immuta Fingerprints feature eliminates any uncertainty about how downstream use could be impacted. It calculates the impact of data policy changes and provides users with visualizations of the statistical deviation. Together with the Immuta Policy Inheritance feature, downstream users are notified about any changes and are provided details on how the changes will affect their use of the impacted data.
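Conceptually, a fingerprint is just a statistical summary that can be recomputed after a policy change and compared with the original; the sketch below (an assumption about the general idea, not Immuta's implementation) reports the relative change in a numeric column's statistics when a new policy hides some rows.

```python
from statistics import mean, stdev

def fingerprint(values: list[float]) -> dict[str, float]:
    """Summarize a numeric column so its shape can be compared across policy changes."""
    return {"count": float(len(values)), "mean": mean(values), "stdev": stdev(values)}

def deviation(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Relative change in each statistic after a policy change."""
    return {k: (after[k] - before[k]) / before[k] for k in before if before[k] != 0}

# Example: a new purpose-based policy hides half of the rows from downstream users.
original = [52.0, 47.5, 61.2, 38.9, 70.3, 55.1]
restricted = [52.0, 47.5, 55.1]
print(deviation(fingerprint(original), fingerprint(restricted)))
```

Surfacing these deviations to downstream users is what lets them judge whether a dashboard or model built on the data is still trustworthy after the change.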

Immuta also announced that its platform is now interoperable with the Databricks Spark analytics engine, and with the cloud-based data warehouses Google BigQuery and Snowflake.

Steve Touw, Co-founder and Chief Technology Officer, Immuta said: “Immuta has delivered another industry first: workflow features that allow users to confidently and securely collaborate with sensitive data across their organization because they know, through the Immuta platform, that the rules will be enforced. They also have the freedom to make changes on the fly without any concern that privacy policies will be infringed upon. Also, if a policy needs to be changed, we’ve made it easy for the user to go through the workflow and adapt as needed.”


from Help Net Security http://bit.ly/2XYHg6z

Dryer Sheets Can Clean, Polish, and Make Almost Anything Smell Fresh


The humble dryer sheet may seem like the uni-tasker of the laundry room, but in reality, it can improve your life in myriad ways.

Instead of discarding used dryer sheets, use them as a heavy-duty wipe that won’t leave behind residue or bits of fluff. Used dryer sheets can clean computer screens, polish glasses, remove deodorant marks from clothes, and scrub soap scum.

Or if you happen to have extra fresh dryer sheets (and not enough laundry to do), you can use their static-fighting abilities to tame your flyaway hairs, freshen up smelly shoes and gym bags, or hide them around your car for a fresh scent.


from Lifehacker http://bit.ly/2GTb4Mg

Can Your Employer Fire You After You Quit?

Human Resource: Advice for navigating the modern workplace. Send your career-related questions to humanresource@lifehacker.com.

You give notice, but your employer declines your resignation and fires you on the spot. What happens to your unemployment benefits? Human Resource investigates.

Dear Human Resource,

Late last year, I gave my employer 60 days’ notice of my resignation, which would be effective January 1. Two days after I gave notice, my boss terminated me without cause, stating: “We do not accept your resignation.”

I’m in Massachusetts. I consulted with an employment lawyer here and was told I do not have grounds to sue for wrongful termination. Unemployment determined I was eligible for benefits, and accordingly I received benefits checks.

But after January 1, the checks stopped coming, because my old job told unemployment that I quit. Months later, this remains unresolved, because of a backlog of cases ahead of mine in the unemployment queue.

What recourse do I have? Nothing has changed in my employment situation; I didn’t keep working for them after they terminated me, and they didn’t accept my resignation, so how can they claim I quit a job they fired me from? How can I fight to get my unemployment benefits? My savings are dwindling and there’s no end to this situation in sight. Thank you for any recommendations you may have.

Research local laws regarding unemployment benefits

Your Human Resource is not a lawyer, so I had to speak to some experts to get a sense of the context here—in particular the Massachusetts context. State regulations can matter quite a bit in determining how unemployment benefits are actually administered. So a word to any reader in a situation like this in any state: Start by researching the local details.

The state unemployment agency in Massachusetts is the Department of Unemployment Assistance. Given the circumstances you describe, you should have received a “notice of disqualification” when your benefits suddenly stopped.

Amy Epstein Gluck, a partner at FisherBroyles and an author of that firm’s useful employment law blog, tells me that this would have been the ideal moment to file an appeal with the agency—you’re supposed to do so within ten days of receiving that notice. “If you haven’t received one yet, call and request it,” she adds. “You must continue to request benefits while your appeal is pending in order to receive payment for those weeks if you win.”

(Note that Ms. Epstein Gluck is not offering specific legal counsel here. It’s possible you may want to talk to a lawyer again; more on that below.)

But in the short term, you should appeal—even if you’re now (way) past the deadline, Massachusetts employment lawyer Jill Havens of Havens Law Office tells me. Offer whatever explanation accounts for the delay, and hope you can get a hearing to make your case.

Getting laid off vs. getting fired

In general, when you file for unemployment benefits, you are asked (among other things) whether you quit, were fired, were laid off, or whatever the situation may be. Your former employer is asked similar questions. Benefits are doled out, or not, accordingly.

Ms. Havens (who, of course, is also not offering specific legal advice here) speculates that there are a couple of possibilities that may explain your situation. One is that your ex-employer took a while to respond (and, by your account, responded dishonestly).

The other possibility is, to me at least, more of a bummer. With 60 days’ notice, you announced you were quitting as of January 1. A day or two later, your employer gave you the boot. For those 58 or so days, you were eligible for unemployment benefits. “But as of January 1, you were going to quit anyway,” Ms. Havens says, channeling the agency’s possible thinking. “So as of that date, you’ve quit.” In a sense, your employer is trying to have it both ways.

Pursuing an appeal

Does that mean you simply lose your benefits? It’s definitely possible, but not certain. As Ms. Fisher notes, the unemployment agency should determine whether or not the reason you had for quitting would qualify you for continued benefits. That’s why you should absolutely pursue an appeal. And, perhaps, consider talking to an attorney again.

Note here that you have the right to view all materials related to your claim: possibly useful if you think your ex-employer made a false material statement of fact to this agency about the details of your unemployment, which could amount to fraud.

You mention having consulted with a lawyer earlier, but if you’re not dazzled by the advice you got and would like other options, check out the Massachusetts Employment Lawyers Association—there’s a membership directory listing scores of attorneys focused on representing workers (as opposed to employers).

The best thing you can do is move on

That said, I’m generally very cautious about suggesting full-on legal action if you have other options. By all means appeal to the agency, but I think you might be better off focusing most of your energy on finding your next gig, and leaving this experience behind. You do have some possible recourse strategies as outlined here—but they may not be very appealing, and they are certainly no guarantee.

But perhaps Lifehacker readers (lawyers and otherwise!) will have another take. Speak up in the comments if so!

Send your work-world questions to humanresource@lifehacker.com. Questions may be edited for length and clarity.

 


from Lifehacker http://bit.ly/2PGbus4