Chrome extension devs must drop deceptive installation tactics


After announcing its intention to limit third-party developers’ access to Chrome’s webRequest API, which is used by many ad-blocking extensions to filter out content, Google has followed up by announcing a few more changes meant “to create stronger security, privacy, and performance guarantees”:

  • Chrome extension developers must ditch any deceptive installation tactic they have been using
  • Extensions must only request access to the appropriate data needed to implement their features
  • Extensions that handle user-provided content and personal communications must post privacy policies
  • Apps that use Google Drive APIs will be limited from broadly accessing content or data in Drive.

Preventing deceptive installation tactics

Extensions must be marketed responsibly, Google says, and from July 1 onwards, extensions that use deceptive installation tactics will be removed from the Chrome Web Store.

Such tactics include:

  • Unclear or inconspicuous disclosures on marketing collateral preceding the Chrome Web Store item listing.
  • Misleading interactive elements as part of the distribution flow (e.g., misleading call-to-action buttons, forms that imply an outcome other than the installation of an extension).
  • Adjusting the Chrome Web Store item listing window with the effect of withholding or hiding extension metadata from the user.

Chrome extensions, Drive API, and permissions

The “minimum permissions” policy, to be introduced in the fall of 2019, will require extensions to request only the narrowest set of permissions necessary to provide their existing services or features.

“Developers may use minimally-scoped optional permissions to further enhance the capabilities of the extension, but must not require users to agree to additional permissions. When an update requires additional permissions, end users will be prompted to accept them or disable the extension. This prompt notifies users that something has changed and gives them control over whether or not to accept this new use,” Google explained.
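
In practice, the policy pushes developers toward a split between required and optional permissions. Here is a minimal, hypothetical manifest sketch (the extension name and host patterns are invented) of the pattern this encourages: only the narrow permission is mandatory, while broader access is declared optional and requested at runtime:

```json
{
  "name": "Hypothetical Highlighter",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": ["activeTab"],
  "optional_permissions": ["tabs", "https://*/*"]
}
```

With this layout, the extension calls chrome.permissions.request() to prompt the user for the optional capabilities only when a feature that needs them is actually enabled.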

Extensions that fail to comply with the policy will be removed from the Chrome Web Store and disabled in end users’ browsers.

Also, developers of extensions that handle user-generated content and personal communications must now publish a privacy policy. It should include an explanation of what information is collected, how that information is used, and the circumstances in which it is shared. (This change is also slated for the fall of 2019.)

Finally, as it previously did for Gmail, Google is making it so that Drive users get more control over what data third-party apps can access in their Drive.

“With this updated policy, we’ll limit apps that use Google Drive APIs from broadly accessing content or data in Drive. This means we’ll restrict third-party access to specific files and be verifying public apps that require broader access, such as backup services,” Ben Smith, Google Fellow and VP of Engineering, explained.

These changes will go into effect early next year and Google will start notifying impacted developers in the next few months.


from Help Net Security http://bit.ly/30WmJ4N

Thursday, May 30, 2019

Siemens LOGO!, a PLC for small automation projects, open to attack

LOGO!, a programmable logic controller (PLC) manufactured by Siemens, sports three vulnerabilities that could allow remote attackers to reconfigure the device, access project files, decrypt files, and access passwords.

About LOGO!

LOGO! is an intelligent logic module meant for small automation projects in industrial (control of compressors, conveyor belts, door control, etc.), office/commercial and home settings (lighting control, pool-related control tasks, access control, etc.).

It is deployed worldwide and can be controlled remotely.

About the vulnerabilities

The vulnerabilities, discovered and reported by Manuel Stotz and Matthias Deeg from German pentesting outfit SySS GmbH, are the following:

  • CVE-2019-10919 – Missing authentication for critical functions (e.g., reading profile information that contains sensitive data such as the various configured passwords, or setting passwords), which could allow an attacker to reconfigure the device and obtain project files.
  • CVE-2019-10920 – Use of a hard-coded cryptographic key (the aforementioned configured passwords, for example, are encrypted with it).
  • CVE-2019-10921 – Passwords stored in a recoverable (cleartext) format in the project file.

All versions of Siemens LOGO!8 BM (basic module) are affected.

As confirmed by Siemens, all three vulnerabilities can be exploited by an unauthenticated attacker with network access to port 10005/tcp, with no user interaction.
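
Since exploitation requires nothing more than network access to 10005/tcp, a first sanity check is whether that port is reachable at all. A minimal Python sketch (the device address is a made-up example; this checks TCP reachability only and says nothing about exploitability):

```python
import socket

def port_open(host: str, port: int = 10005, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "192.168.0.10"  # hypothetical LOGO! BM address on a lab network
    print(f"10005/tcp open on {host}: {port_open(host)}")
```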

“The LOGO!8 BM manual recommends protecting access to Port 10005/TCP,” ICS CERT noted. Siemens also advises implementing Defense-in-Depth, as outlined in the device system manual.

Two weeks ago, when Siemens released the advisory, there were no known public exploits specifically targeting these vulnerabilities.

In the meantime, SySS researchers have published advisories (1, 2, 3) containing more details about the flaws, PoC exploit code (Nmap scripts), and a video demonstration of the attacks:


from Help Net Security http://bit.ly/2W2SpBP

Researchers fight ransomware attacks by leveraging properties of flash-based storage

Ransomware continues to pose a serious threat to organizations of all sizes. In a new paper, “Project Almanac: A Time-Traveling Solid State Drive,” University of Illinois students Chance Coats and Xiaohao Wang and Assistant Professor Jian Huang from the Coordinated Science Laboratory look at how they can use the commodity storage devices already in a computer to save files without having to pay the ransom.

Recovering data encrypted by a variety of ransomware families

“The paper explains how we leverage properties of flash-based storage that currently exist in most laptops, desktops, mobiles, and even IoT devices,” said Coats, a graduate student in electrical and computer engineering (ECE). “The motivation was a class of malware called ransomware, where hackers will take your files, encrypt them, delete the unencrypted files and then demand money to give the files back.”

The flash-based, solid-state drives Coats mentioned are part of the storage system in most computers. When a file is modified on the computer, rather than getting rid of the old file version immediately, the solid-state drive saves the updated version to a new location.

Those old versions are the key to thwarting ransomware attacks. If there is an attack, the tool discussed in the paper can be used to revert to a previous version of the file. The tool would also help in the case of a user accidentally deleting one of their own files.
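
The mechanism is easy to picture in miniature. The toy Python model below is not the researchers’ implementation, just a sketch of out-of-place writes: each logical block keeps its older versions instead of overwriting them, so any block can be “time-traveled” back to its state before an attack:

```python
import time
from collections import defaultdict

class VersionedBlockStore:
    """Toy model of an SSD that writes out of place and keeps old versions."""

    def __init__(self):
        # logical block number -> list of (timestamp, data), oldest first
        self._versions = defaultdict(list)

    def write(self, block: int, data: bytes) -> None:
        # Out-of-place write: append a new version, never overwrite.
        self._versions[block].append((time.time(), data))

    def read(self, block: int) -> bytes:
        return self._versions[block][-1][1]  # newest version wins

    def rollback(self, block: int, before: float) -> bytes:
        """Return the newest version written strictly before `before`."""
        for ts, data in reversed(self._versions[block]):
            if ts < before:
                return data
        raise KeyError(f"no version of block {block} before {before}")

store = VersionedBlockStore()
store.write(7, b"original document")
checkpoint = time.time()
store.write(7, b"ENCRYPTED BY RANSOMWARE")   # malicious overwrite
print(store.rollback(7, before=checkpoint))  # b'original document'
```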

Like any new tool, there is a trade-off.

“When you want to write new data, it has to be saved to a free block, or block that has already been erased,” said Coats. “Normally a solid-state drive would delete old versions in an effort to erase blocks in advance, but because our drive is keeping the old versions intentionally, it may have to move the old versions before writing new ones.”

Coats described this as a trade-off between retention duration and storage performance. If the parameters of their new tool are set to maintain data for too long, old and unnecessary versions will be kept and take up space on the storage device. As the device fills with old file versions, the system takes longer to respond to typical storage requests and performance degrades.

On the other hand, if the parameters are set to a retention window that is too narrow, users would have a quicker response time, but they may not have all of their backup files saved should a malware attack take place.

To manage this trade-off, Huang and his students built in functionality for the tool to monitor and adjust these parameters dynamically. Despite the dynamic changes to system parameters, their tool guarantees data will be retained for at least three days. This gives users the option to back up their data onto other systems within the guaranteed time period if they choose to do so.
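
A controller for that trade-off can be sketched very simply (the thresholds and step factor below are invented for illustration): shrink the retention window as the drive fills up, grow it back when space frees, and never drop below the three-day guarantee:

```python
MIN_RETENTION_S = 3 * 24 * 3600  # the paper's three-day retention floor

def next_retention_window(current_s: float, utilization: float,
                          high: float = 0.85, low: float = 0.60,
                          step: float = 0.75) -> float:
    """Hypothetical controller: trade retention duration for performance.

    utilization is the fraction of flash blocks holding live or old data.
    """
    if utilization > high:                       # running out of free blocks
        return max(current_s * step, MIN_RETENTION_S)
    if utilization < low:                        # plenty of space available
        return current_s / step
    return current_s                             # comfortable middle ground
```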

The idea behind their tool has gained interest at an international level. The paper about this research was published at the EuroSys conference. Coats represented the group at the conference.

“Our research group really enjoys building practical computer systems; this is a great practice for our students, they will experience how our research will generate real-world impact,” said Huang, an assistant professor of electrical and computer engineering at Illinois. “Moving forward, our group will look at the possibility of retaining user data in a storage device for a much longer time with lower performance overhead, and applying the time-traveling solid-state drive to wider applications such as systems debugging and digital forensics.”


from Help Net Security http://bit.ly/2HM7C6k

New infosec products of the week: May 31, 2019

SailPoint Predictive Identity platform: The future of identity governance

SailPoint unveiled the SailPoint Predictive Identity platform, the intelligent cloud identity platform of the future that accelerates the industry to the next generation of identity governance. The solution automates identity processes using AI-driven recommendations while finding new areas of access and bringing them under governance with auto-discovery.

Zyxel SD-WAN gets security, usability and speed boost

Zyxel SD-WAN provides a reliable and secure WAN through an annual software license that runs on ZyWALL VPN50, VPN100 and VPN300 firewalls. The license pack includes the Orchestrator Management interface, Dynamic Path Selection (DPS), WAN optimization, auto VPN, and zero-touch provisioning, as well as three security services: content filtering, application control and Geo Enforcer.

StorageCraft ShadowXafe protects all data for midsize companies and MSPs irrespective of its source

StorageCraft comprehensively protects all data for midsize companies and MSPs irrespective of its source: whether on-premises or in the cloud, from VMware or Hyper-V applications, and from both desktops and servers. For total disaster recovery and business continuity, StorageCraft ShadowXafe replicates data to a public cloud, the StorageCraft Cloud, or an off-premises location.

Moogsoft AIOps 7.2 eases the burden of IT operations and DevOps teams

Moogsoft AIOps 7.2 features new capabilities that ease the burden of IT Operations and DevOps teams by optimizing service assurance. Significant new transparency, efficiency, and customization enhancements include: a new workflow engine, AI visualizations, performance dashboards, and new tool integrations.

AD Enterprise: Perform end-to-end post-breach forensic investigations within a single tool

AccessData Group released AD Enterprise 7.1, a new version of its software for managing internal forensic investigations and post-breach analysis that contains first-to-market integration with cybersecurity platforms to automate the early stages of data collection. An API, which is available as an add-on option, enables a secure connection between a client’s cyber platform (e.g., Demisto, Phantom, etc.) and AD Enterprise.

Bittium Tough Mobile 2: Smartphone with multilayered security structure

The core of the information security of the new Bittium Tough Mobile 2 is its multilayered security structure, which is based on the hardened Android 9 Pie operating system, unique hardware solutions, and the information security features and software integrated in the source code. The multilayered information security ensures that both the data stored in the device and data transfer are protected as effectively as possible.


from Help Net Security http://bit.ly/2HMypiW

What mechanisms can help address today’s biggest cybersecurity challenges?

In this Help Net Security podcast, Syed Abdur Rahman, Director of Products with unified risk management provider Brinqa, talks about the company’s risk-centric, knowledge-driven approach to cybersecurity problems like vulnerability management, application security, and cloud and container security.

Here’s a transcript of the podcast for your convenience.

Hi, my name is Syed Abdur and I’m the Director of Products at Brinqa, where I’m responsible for product management and technical product marketing.

Brinqa is a cyber risk management company based out of Austin, Texas. We pride ourselves on a unique risk-centric, knowledge-driven approach to cybersecurity problems like vulnerability management, application security, and cloud and container security. We see these problems as a subset of a greater category of cyber risk management problems.

We’re really excited to see that our unique approach to these problems is really resonating well with the industry – both in terms of our customers, who represent some of the largest organizations in retail, healthcare, critical infrastructure and financial services verticals, as well as through awards. We recently won the Cyber Defense Magazine InfoSec Award for the best product in the vulnerability management category, as well as the Groundbreaking Company Award in the application security category awarded at RSAC this year.

To explain Brinqa’s product philosophy, I’m going to talk to you for a bit about a concept known as knowledge graphs, which aligns really well with the way we think about the information infrastructure necessary to address cybersecurity problems. Knowledge graphs are the information architecture behind Google search. You know how when you search for a term on Google, you see this index card on the right with a list of all the related terms about the specific thing that you searched for?

Google is able to present you these results at a moment’s notice because they have already built a gigantic knowledge graph of all the information in their database and, more importantly, the relationships that exist between all of that information. Imagine if you had a similar tool for cybersecurity: a knowledge graph or a knowledge base for all your relevant cybersecurity information, with any dependencies and relationships necessary to answer a question no more than a click away.

So, do we think that this type of mechanism can help address some of the biggest challenges in cybersecurity today? For instance, can it help us make sure that cybersecurity decisions are based on complete, accurate, and up-to-date information? Can it help us ensure that we’re making the best use of our cybersecurity tools, budgets and resources? Can it help us determine whether more tools and solutions are helping us or hurting us?

There are four key characteristics that define knowledge graphs. These are also rules that all Brinqa applications follow religiously.

The first rule is that it’s literally a graph: a collection of nodes and relationships. The reason why this is important is that if there is any relationship between two pieces of information or facts, we are able to actually make use of it. In the context of cybersecurity this becomes really important, because it helps us ensure that if there is any information relevant to our risk analysis, no matter where it exists in the organization, as long as it is related to the asset we are analyzing, we can actually get to it and make use of it as part of our analysis.
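
To illustrate this first rule, here is a tiny Python sketch (not Brinqa’s implementation; the entity names are invented) of a graph of nodes and typed relationships, where “getting to” related information is just a traversal outward from the asset under analysis:

```python
from collections import defaultdict, deque

class KnowledgeGraph:
    """Minimal property-graph sketch: nodes connected by typed edges."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, neighbor), ...]

    def relate(self, src, relation, dst):
        self.edges[src].append((relation, dst))
        self.edges[dst].append((f"inverse:{relation}", src))

    def context(self, start, max_hops=2):
        """All facts reachable from `start` within max_hops: the related
        information we want on hand during risk analysis."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, hops = queue.popleft()
            if hops == max_hops:
                continue
            for _relation, nbr in self.edges[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, hops + 1))
        return seen

g = KnowledgeGraph()
g.relate("web-server-01", "runs", "openssl-1.0.1")
g.relate("openssl-1.0.1", "affected_by", "CVE-2014-0160")
g.relate("web-server-01", "owned_by", "payments-team")
print(g.context("web-server-01"))  # pulls in the CVE two hops away
```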

The second rule is that it’s semantic. Knowledge graphs follow a well-defined ontology, which makes them different from things like data lakes, where there isn’t necessarily a strict structure and definition to the way data is stored and represented. In the case of cybersecurity, for instance, Brinqa has built its own cybersecurity data ontology, which essentially maps the relationships between all the different types of entities that you would want to monitor, as well as things like vulnerabilities, alerts, notifications, and gaps. We are in complete control of how this information will be represented once it comes into the knowledge graph.

The third rule is that it actually creates new knowledge. It’s not just a data store where we’re dumping information; it’s a source of new knowledge, created by analyzing and working on the information that’s being populated into the knowledge graph. In our case, with cyber risk management applications, this is usually represented as risk ratings and risk scores that result from analysis of the information coming in from other cybersecurity tools and products.

And then the fourth, and maybe the most important characteristic of knowledge graphs, is that they’re alive, which means that when we talk about the data ontology that defines the structure of information within a knowledge graph, it is completely dynamic, it is completely open to change. As your information is changing in the outside world, the knowledge graph essentially adapts to represent that information as accurately as possible.

We expect organizations to become more proactive and involved in the design and implementation of their cyber risk management programs, and this is really based on how we have seen our own solutions and ecosystem evolve and grow through our interactions with customers and prospects. Brinqa right now has more than 100 connectors to all types of cybersecurity, IT, and business data sources. I would say about 80 percent of these connector development requests originated from our customers and prospects. So, it’s very common for us to go into a deployment or proof of concept with our standard set of connectors for a particular problem like vulnerability management or application security, and then get requests for entirely new connectors that the customer wants to make part of their cyber risk management model.

Once we are exposed to those connectors, they make a lot of sense, and obviously we encourage all of our customers to consider building them into their cyber risk management process. But I think what this really drives home for us is the fact that risk management is a subjective exercise by nature. Your risk analysis has to really reflect who you are as an organization, and giving organizations the tools to do that is where we see the most advancements and most emerging trends in cyber risk management.

For example, when we first started applying our platform to the problem of vulnerability management, we understood the standard set of data sources that we would need to integrate to accurately identify and address risks across network infrastructure, which is what vulnerability management was focused on for a really long time. We knew that this would typically include your vulnerability assessment tools, which are obviously the primary source of vulnerability information.

These tools are also used really extensively for asset discovery, so we knew that we would be getting asset information out of these tools. We also knew that most organizations have some other form of asset inventory, either as a CMDB or a dedicated asset inventory tool. And we knew that this would also be a source of valuable asset metadata on things like ownership, escalation chains, business impact, compliance requirements, data classification and so on. That was another obvious data source that we knew we would want to integrate to build a solution for vulnerability management.

Once we build this internal context by looking at internal asset inventories, CMDB systems, and other asset discovery tools, we know that we also need to build the external context around the problem we are solving.

Most organizations also have access to threat intelligence feeds which are a really good source of information about which vulnerabilities are most likely to be exploited, based on factors like: “Are there any known exploits for a vulnerability? Does a tool kit exist that makes use of this particular vulnerability? Are there any known threat actors that are utilizing a particular vulnerability as part of their attacks? Are we seeing a surge in chatter about a specific problem in the dark web?”

By providing a lot of additional external context around vulnerabilities, threat intelligence feeds can also help us do a good job of prioritizing what needs to be fixed. By combining these three primary data sources, your vulnerability assessment tools, your asset context and your threat intelligence, we know that there is enough information for us to make good decisions about what needs to be addressed. And then obviously, the end goal of vulnerability management is to actually reduce the risk, reduce the exposure to threats and the potential impact posed by these risks.
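
To make that combination concrete, here is a hypothetical prioritization sketch in Python (the weights and signal names are invented for illustration) that blends scanner severity, asset context, and threat-intelligence signals:

```python
def priority(cvss: float, asset_criticality: float,
             known_exploit: bool, threat_actor_use: bool,
             dark_web_chatter: bool) -> float:
    """Scanner severity weighted by business context and threat intel."""
    intel = 1.0
    if known_exploit:       # a public exploit or toolkit exists
        intel += 0.5
    if threat_actor_use:    # known actors use it in active campaigns
        intel += 0.5
    if dark_web_chatter:    # surge in chatter about the flaw
        intel += 0.25
    return cvss * asset_criticality * intel

# A medium-severity flaw on a critical, actively exploited system can
# outrank a high-severity flaw on a low-value box:
print(priority(6.5, 1.0, True, True, False))    # 13.0
print(priority(9.0, 0.3, False, False, False))  # 2.7
```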

We knew that we also wanted to integrate ITSM systems, which is primarily where people keep track of user tasks and tickets for remediation. We knew that by combining these four primary data sources (vulnerability assessment tools, asset inventories, threat intelligence, and ITSM tools) we can actually build an end-to-end vulnerability management process which is fully automated, going from the identification, analysis and prioritization of vulnerabilities to the actual creation of remediation tickets, the validation of remediation actions, the reporting of risk reduction, and so on.

But once our customers started using these solutions, they started coming up with a lot of additional data points that made a lot of sense, and if available, should be integrated into these solutions. One of the first things that we started getting requests for was network administration tools.

If you think about it, one of the first security controls that we implement when we’re setting up a network is network segmentation. We set up segments based on what parts of the network need to process different types of information, what is more critical, are there any compliance requirements, things like that.

Since we have already built some risk information into our controls in the form of segmentation, it only makes sense that we should incorporate these as part of our risk analysis models. Similarly, endpoint protection systems were another really interesting integration because, if you think about it, with endpoint protection systems you can set up policies to protect specific endpoints against exploits that use specific vulnerabilities.

Essentially, they provide mitigating controls for the problems that exist on that endpoint. As we’re doing our risk analysis for the vulnerabilities that exist on that endpoint, it makes sense to look at whether any mitigating controls exist on that box. Also, by thinking about vulnerability management, and really other cyber risk management problems, as a problem of information architecture, we can imagine how easy it is to translate these types of solutions into other areas.

It’s very easy to go from vulnerability management for your network infrastructure to vulnerability management for application security. In that case we would be bringing in data from slightly different data sources: your asset inventories would instead be application inventories or code repositories, and instead of network vulnerability assessment tools we would be using results from static analysis, dynamic analysis, software composition analysis, and penetration testing tools.

If we think about these problems as a problem of knowledge gathering and information architecture, we can see how it’s really easy to translate processes from one area of your infrastructure to another.


from Help Net Security http://bit.ly/2JMrJUf

Researchers spot manipulated photos and video using AI-driven imaging system

To thwart sophisticated methods of altering photos and video, researchers at the NYU Tandon School of Engineering have demonstrated an experimental technique to authenticate images throughout the entire pipeline, from acquisition to delivery, using artificial intelligence (AI).

In tests, this prototype imaging pipeline increased the chances of detecting manipulation from approximately 45 percent to over 90 percent without sacrificing image quality.

Determining whether a photo or video is authentic is becoming increasingly problematic. Sophisticated techniques for altering photos and videos have become so accessible that so-called “deep fakes” — manipulated photos or videos that are remarkably convincing and often include celebrities or political figures — have become commonplace.

Paweł Korus, a research assistant professor in the Department of Computer Science and Engineering at NYU Tandon, pioneered this approach. It replaces the typical photo development pipeline with a neural network — one form of AI — that introduces carefully crafted artifacts directly into the image at the moment of image acquisition. These artifacts, akin to “digital watermarks,” are extremely sensitive to manipulation.

“Unlike previously used watermarking techniques, these AI-learned artifacts can reveal not only the existence of photo manipulations, but also their character,” Korus said.

The process is optimized for in-camera embedding and can survive image distortion applied by online photo sharing services.
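
The paper’s artifacts are learned end to end by a neural network, but the underlying notion of a fragile watermark can be shown with a deliberately crude stand-in. The Python sketch below uses least-significant-bit embedding (which, unlike the NYU approach, would not survive benign post-processing) purely to illustrate how a local edit breaks an embedded pattern:

```python
import numpy as np

def embed(img: np.ndarray, key: int = 42) -> np.ndarray:
    """Toy fragile watermark: overwrite each pixel's least-significant
    bit with a keyed pseudorandom pattern. (The NYU system *learns* its
    artifacts with a neural network; this is only an analogy.)"""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=img.shape, dtype=np.uint8)
    return (img & 0xFE) | pattern

def tamper_map(img: np.ndarray, key: int = 42) -> np.ndarray:
    """Boolean map of pixels whose LSB no longer matches the pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=img.shape, dtype=np.uint8)
    return (img & 1) != pattern

img = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
marked = embed(img)
marked[2:4, 2:4] = 255  # simulate a local manipulation
# Mismatches show up only inside the edited 2x2 region (wherever the
# overwrite flipped the expected pattern bit).
print(tamper_map(marked).astype(int))
```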

The advantages of integrating such systems into cameras are clear.

“If the camera itself produces an image that is more sensitive to tampering, any adjustments will be detected with high probability,” said Nasir Memon, a professor of computer science and engineering at NYU Tandon and co-author, with Korus, of a paper detailing the technique. “These watermarks can survive post-processing; however, they’re quite fragile when it comes to modification: If you alter the image, the watermark breaks,” Memon said.

Most other attempts to determine image authenticity examine only the end product — a notoriously difficult undertaking.

Korus and Memon, by contrast, reasoned that modern digital imaging already relies on machine learning. Every photo taken on a smartphone undergoes near-instantaneous processing to adjust for low light and to stabilize images, both of which take place courtesy of onboard AI.

In the coming years, AI-driven processes are likely to fully replace the traditional digital imaging pipelines. As this transition takes place, Memon said that “we have the opportunity to dramatically change the capabilities of next-generation devices when it comes to image integrity and authentication. Imaging pipelines that are optimized for forensics could help restore an element of trust in areas where the line between real and fake can be difficult to draw with confidence.”

Korus and Memon note that while their approach shows promise in testing, additional work is needed to refine the system. The solution is open source and can be accessed on GitHub.


from Help Net Security http://bit.ly/2wu51rt

StorageCraft ShadowXafe protects all data for midsize companies and MSPs irrespective of its source

StorageCraft, whose mission is to protect all data and ensure its constant availability, announced a powerful upgrade and expansion of its flagship product, StorageCraft ShadowXafe.

The solution now provides enhanced features for Managed Service Providers (MSPs), including data monitoring, protection, and recovery for the entire data center, independent of size and type of machine, from a single console. It eliminates complexity, improves productivity, and reduces pressure on IT skills and training. ShadowXafe outperforms competitive offerings by multiple orders of magnitude and delivers immediate business impact.

Key enhancements of the new ShadowXafe version include:

  • MSP-friendly, flexible billing and invoicing for improved productivity
  • Network tunneling for massive yet simple scaling of new and expanded customer deployments
  • Support for Hyper-V applications.

For MSPs, standardizing on StorageCraft ShadowXafe offers a path to improved productivity, profitability, and business success. It saves MSPs hours during deployment and management, takes milliseconds for recovery of data, and reduces the potential for errors. Its scalability lets MSPs add additional data protection and workflows in a few clicks.

By supporting a virtually unlimited number of nodes, it allows MSPs to grow their customer base without increasing service desk staff. ShadowXafe delivers everything through a single license and a single, usage-based, recurring or combination billing system. StorageCraft ShadowXafe supports consolidated automated billing through PSA and RMM partners.

ShadowXafe’s native tunneling feature allows for the management of thousands of devices with minimal impact on network performance and without having to reconfigure firewalls. Organizations can now scale at speed and with ease because ShadowXafe can restore any device or an entire IT infrastructure from a single console and via a single pane of glass.

With uncompromising reliability, speed and simplicity, StorageCraft comprehensively protects all data for midsize companies and MSPs irrespective of its source: whether on-premises or in the cloud, from VMware or Hyper-V applications, and from both desktops and servers. For total disaster recovery and business continuity, StorageCraft ShadowXafe simply and effortlessly replicates data to a public cloud, the StorageCraft Cloud, or an off-premises location.

By supporting agentless and agent-based protection, StorageCraft ShadowXafe satisfies SLA requirements for performance-intensive VMs and physical machines. It delivers protection for multiple use cases – including physical and virtual servers, on-premises and cloud, and DRaaS, in a single solution. StorageCraft ShadowXafe recovers virtual machines (VMs) in milliseconds, outperforming competitive offerings by multiple orders of magnitude. It recovers and restores entire IT infrastructures in minutes and, with a few clicks, delivers total business continuity with a complete and orchestrated virtual failover to the cloud. Exceedingly powerful and flexible, StorageCraft ShadowXafe is 20x faster to deploy than any competitive offering.

Because StorageCraft ShadowXafe features a modern architecture of microservices which provide a simple, scalable, and flexible workflow, it makes managing and restoring VMs and physical machines a breeze. In addition, it offers seamless integration into DRaaS for failover and recovery. In the event of system-wide failure, data corruption or natural disaster, ShadowXafe’s patented VirtualBoot technology allows organizations to perform a virtual machine recovery in milliseconds and restore entire infrastructures in minutes.

With a broad array of features and capabilities and powerful, differentiated functionality at SME pricing, StorageCraft ShadowXafe is an ideal solution for SMEs and MSPs of all sizes and levels of complexity.


from Help Net Security http://bit.ly/2W5ISd3

Cymulate launches new Advanced Persistent Threat simulation

Cymulate, the most comprehensive, on-demand SaaS-based Breach and Attack Simulation (BAS) platform, launched its new Advanced Persistent Threat (APT) simulation.

The new simulation vector enables companies to simulate a full-scale APT attack on their network with a click of a button, challenging security control mechanisms through the entire cyber kill chain, from pre-exploitation (Reconnaissance, Weaponization and Delivery) into exploitation, and even post-exploitation activities such as Command and Control (C&C) communication and data exfiltration.

The APT simulation vector also tests security controls against the very latest threats circulating in the wild. According to Cymulate’s Research Laboratory, 67% of these threats pose an immediate risk to organizations. Over 200 companies worldwide across 11 different industries took part in the research, revealing energy, consulting and airline verticals as being the least prepared for immediate threats.

Simulations of high profile variants, performed since the beginning of 2019, showed that:

  • 40% of organizations were at risk from the Dridex Trojan
  • 26% of organizations were at risk from an Emotet variant that serves the Trickbot malware
  • 38% of organizations were at risk from malware launched by the North Korean group Hidden Cobra
  • 33% of organizations were at risk from the Ryuk ransomware.

Unlike rival solutions, Cymulate’s Full Kill-Chain APT simulation vector is comprehensive and highly customizable, providing a sweeping overview of potential exposures including email, web gateway, phishing, endpoint, lateral movement and data exfiltration. The platform also uses unique algorithms to predict potential future APT attacks and proactively simulate them, offering appropriate detection and mitigation insights.

“Cymulate’s APT Simulation vector is the most thorough means to measure a company’s true security posture, which is vital when hackers are probing for security gaps and adapting to new defenses continuously,” said Eyal Wachsman, Cymulate’s co-founder and CEO. “For that reason, the Full Kill-Chain APT Simulation vector enables a full campaign of a cyber-attack kill chain to be simulated, just as it would be done by a real hacker, making this a critical tool in every security team’s arsenal.”

Cymulate’s SaaS-based platform enables organizations to automatically assess and improve their overall security posture in minutes by continuously testing defenses against a variety of attack vectors and APT attack configurations. Simulations, which can be run on demand or scheduled to run every day, week or month, provide specific actionable insights and data on where the company is vulnerable and how to close the security gaps.

Cymulate was founded in June 2016 by cybersecurity veterans Eyal Wachsman and Avihai Ben-Yossef, alongside Eyal Gruner, a serial entrepreneur and investor in cybersecurity startups. Cymulate is active across all industry verticals, supporting customers in North America, Europe, Asia, and Australia.


from Help Net Security http://bit.ly/2JPVAuP

HiveIO delivers data center intelligence with Hive Fabric 7.3

HiveIO, a company that transforms commodity data center equipment into an intelligent virtualization platform, released version 7.3 of Hive Fabric, an Artificial Intelligence-ready fabric solution that enables organizations to deploy virtualization technology without the need for vendor complexity or specialists.

The latest software release provides Hive Fabric users with increased operational capabilities to further reduce the time needed to support a virtualization environment while also maximizing the performance, capacity, and spend on existing infrastructure.

“Hive Fabric was developed with IT professionals in mind, helping them withstand common industry pain points like flexibility and usability,” said Dan Newton, CEO of HiveIO. “The solution has helped IT in a variety of industries exceed their business goals by creating a virtualization solution that works with users, not against them. We’re continuing to grow with a user-first mindset, and the launch of 7.3 delivers the new capabilities based directly on feedback and needs of current Hive Fabric users.”

Hive Fabric combines KVM hypervisor, software-defined storage (SDS) and networking, and virtual desktop management, into an all-in-one virtualization solution, eliminating the need for a multi-vendor, multi-contract approach. The new features within the 7.3 solution include:

Graphics acceleration: The rise in augmented and virtual reality has increased the need for graphics acceleration. To seamlessly improve the performance of virtual machines (VMs), administrators can now install graphics processing units (GPUs) inside of Hive Fabric-enabled servers and then simply turn the acceleration on or off with a single click. Graphics acceleration is available via GPU Sharing or GPU Passthrough and supports NVIDIA, ATI, and Intel.

Software-Defined Networking (SDN): Flexible networking is key to delivering a fully virtualized data center. As Ethernet links consolidate and speeds increase, IT administrators increasingly need to separate traffic and guarantee bandwidth for desktops and applications. Administrators can now add multiple physical and virtual SDNs, giving them the flexibility to fit any network architecture.

Configurable in-memory storage: Balancing business requirements and the cost of infrastructure is challenging for any IT team. Memory is the most scarce, highest-cost resource in the data center and a key to meeting competing business objectives. The SDS capability extends to managing server memory, allowing it to be allocated to either storage or memory for virtual machines, with differing allocations possible on every server.

Hive Sense: The comprehensive simplicity of setting up and running Hive Fabric extends to HiveIO Support. Introduced in 7.3, Hive Sense will allow HiveIO to proactively support customers by sending logs, metrics, and configuration information back to the company. This reduces the time needed to collect logs or understand how the infrastructure is deployed, so support engineers can resolve issues faster and remove the burden from your IT administrators.

Unlike legacy platforms that require specialists to operate overly complicated systems, Hive Fabric utilizes an Intelligent Message Bus and intuitive user interface (UI) to show an all-encompassing view of a data center and its connected components in real time. This makes it easy for administrators to find and act upon vital information and reduce downtime.

“The Hive Fabric UI, coupled with the easy-to-use enhancements in 7.3, empowers administrators of all skill levels to manage the entire data center,” said Toby Coleridge, Vice President of Product at HiveIO. “Organizations can reallocate their highly-skilled specialists to other areas of the business to drive innovation rather than be bogged down with daily administrative tasks.”


from Help Net Security http://bit.ly/30TpmnU

Ricoh searches terabytes of global IT logs in real time with Elasticsearch

Ricoh is operationalizing the Elastic Stack to visualize and monitor two terabytes of logging data a day to watch for and react quickly to security threats across its global IT infrastructure.

Prior to implementing the Elastic Stack, Ricoh’s infrastructure surveillance system wasn’t able to instantly link and detect anomalous events from the Internet all the way through to the endpoint. This was exposed during the WannaCry ransomware attack, which prompted Ricoh to issue several security patches for its product and to leverage the Elastic Stack as the foundation of its new Security Control Department. This included Ricoh building a security analytics solution using Elastic’s open source products (Elasticsearch, Kibana, Beats, Logstash), Elastic’s proprietary features like monitoring and alerting, and support from Elastic engineers.

Today, logs from nearly all of Ricoh’s IT devices are monitored and visualized in real time with Kibana on a large screen in Ricoh’s Security Control Department, which is responsible for securing Ricoh’s operational infrastructure.
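
The specifics of Ricoh’s deployment aren’t public, but the general shape of such monitoring is a recurring aggregation query against the log indices. A hypothetical sketch using a recent version of the official Elasticsearch Python client (the cluster address, index and field names are assumptions, not Ricoh’s schema):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # cluster address: placeholder

# Count failed authentication events per host over the last five minutes.
resp = es.search(
    index="logs-*",
    size=0,
    query={"bool": {"filter": [
        {"term": {"event.outcome": "failure"}},
        {"range": {"@timestamp": {"gte": "now-5m"}}},
    ]}},
    aggs={"by_host": {"terms": {"field": "host.name", "size": 10}}},
)
for bucket in resp["aggregations"]["by_host"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```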

“After introducing the Elastic Stack, our Security Control Department was able to better prevent, detect and respond promptly to the ever-changing landscape of global security threats, both internally and externally,” said Mr. Tomotake Wakuri, Senior Specialist, ICT Business Group, Ricoh. “We look forward to working with Elastic as they continue to build new and more powerful features and solutions for the security analytics use case.”

“It is humbling to see that Ricoh has adopted the Elastic Stack to visualize, search and alert for security threats across their global IT infrastructure,” said Shay Banon, CEO and founder of Elastic. “The security use case is a global phenomenon that cuts across networks from Tokyo, to New York, to London. We are excited that Ricoh has decided to partner with us to create a security solution that spans the globe.”


from Help Net Security http://bit.ly/2KbBSJD

Get Modern-Day Myst Successor 'Obduction' for Free Now


Get ready for the weekend by picking up a cool video game for the low, low price of... nothing. GOG.com, the site formerly known as Good Old Games, is giving away the 2016 puzzle game Obduction for free as part of its “Summer Sale Festival.” The game is only free today, May 30, 2019, and tomorrow, May 31, 2019, so I recommend you take a second and grab it real quick before time runs out.

If you haven’t heard of it, Obduction is a puzzle game from developer Cyan, which created the popular PC game Myst. While it isn’t exactly the same, Obduction modernizes the strange, reality-bending ideas in Myst and its sequel, Riven, so if you remember playing and enjoying those games (or just thinking they were a trip), it might be worth popping in and reliving those 1990s gaming glory days. Kotaku’s Nathan Grayson said he was “very impressed” with the game’s opening puzzles. You can check out his playthrough of the first 15 minutes if you want to see the game in action.

If you’re interested in picking up the game, go to the giveaway page on GOG, scroll down to the Obduction banner, and click the button that says, “Get it Free.” You will need to sign into your GOG account to get the game—if you don’t have one, you will be prompted to sign up or sign in with Facebook. (Personally, I’d say make a new account rather than tying more data to Facebook, but the choice is yours). Once you’ve logged in, Obduction will be added to your GOG games list, which you can find by going to the drop-down under your profile name and selecting “Games.”

Unlike Steam or the Epic Games Store, you do not need GOG’s game launcher software, GOG Galaxy, to download and play games purchased through the GOG store. If you want to download the game on its own, go to Obduction’s download page in your games list and scroll down to “Offline Backup Game Installers” and click on the file name. Note that, if you download the game that way, you will need to manually update the game when and if the developer releases new patches.


from Lifehacker http://bit.ly/2W42nmy

Fraudulent Academic Papers

The term "fake news" has lost much of its meaning, but it describes a real and dangerous Internet trend. Because it's hard for many people to differentiate a real news site from a fraudulent one, they can be hoodwinked by fictitious news stories pretending to be real. The result is that otherwise reasonable people believe lies.

The trends fostering fake news are more general, though, and we need to start thinking about how it could affect different areas of our lives. In particular, I worry about how it will affect academia. In addition to fake news, I worry about fake research.

An example of this seems to have happened recently in the cryptography field. SIMON is a block cipher designed by the National Security Agency (NSA) and made public in 2013. It's a general design optimized for hardware implementation, with a variety of block sizes and key lengths. Academic cryptanalysts have been trying to break the cipher since then, with some pretty good results, although the NSA's specified parameters are still immune to attack. Last week, a paper appeared on the International Association for Cryptologic Research (IACR) ePrint archive purporting to demonstrate a much more effective break of SIMON, one that would affect actual implementations. The paper was sufficiently weird, the authors sufficiently unknown and the details of the attack sufficiently absent, that the editors took it down a few days later. No harm done in the end.

In recent years, there has been a push to speed up the process of disseminating research results. Instead of the laborious process of academic publication, researchers have turned to faster online publishing processes, preprint servers, and simply posting research results. The IACR ePrint archive is one of those alternatives. This has all sorts of benefits, but one of the casualties is the process of peer review. As flawed as that process is, it does help ensure the accuracy of results. (Of course, bad papers can still make it through the process. We're still dealing with the aftermath of a flawed, and now retracted, Lancet paper linking vaccines with autism.)

Like the news business, academic publishing is subject to abuse. We can only speculate about the motivations of the three people who are listed as authors on the SIMON paper, but you can easily imagine better-executed and more nefarious scenarios. In a world of competitive research, one group might publish a fake result to throw other researchers off the trail. It might be a company trying to gain an advantage over a potential competitor, or even a country trying to gain an advantage over another country.

Reverting to a slower and more accurate system isn't the answer; the world is just moving too fast for that. We need to recognize that fictitious research results can now easily be injected into our academic publication system, and tune our skepticism meters accordingly.

This essay previously appeared on Lawfare.com.


from Schneier on Security http://bit.ly/2KdpEjK

Attackers are exploiting WordPress plugin flaw to inject malicious scripts

Attackers are leveraging an easily exploitable bug in the popular WP Live Chat Support plugin to inject a malicious JavaScript in vulnerable sites, Zscaler warns.

The company has discovered 47 affected sites (some have been cleaned up in the meantime) but that number is unlikely to be final.

The source of the compromise

The stored cross-site scripting (XSS) vulnerability the attackers are exploiting was discovered by Sucuri researchers earlier this year, and the plugin developers pushed out a security update fixing it on May 15.

“The vulnerability allows an unauthenticated attacker to update the plugin settings by calling an unprotected “admin_init hook” and injecting malicious JavaScript code everywhere on the site where Live Chat Support appears,” Zscaler researcher Prakhar Shrotriya noted.

The injected script sends a request to an attacker-owned domain to execute the main script, which triggers a redirection through multiple URLs that show unwanted popup ads and fake error messages.
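
A site owner worried about this campaign could do a crude check for the injection by scanning pages for external script sources matching known-bad domains. A minimal Python sketch (the blocklisted domain is a placeholder, since the attacker’s actual domain isn’t reproduced here):

```python
import re
import urllib.request

SUSPICIOUS_DOMAINS = {"attacker.example"}  # placeholder blocklist

def injected_scripts(url: str) -> list:
    """Return external <script src=...> URLs that match the blocklist."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.IGNORECASE)
    return [s for s in srcs if any(d in s for d in SUSPICIOUS_DOMAINS)]

print(injected_scripts("https://example.com/"))
```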

Other sources report additional spam sites to which users are redirected.

Double trouble

WP Live Chat Support is one of the most popular WordPress chat plugins, with over 50,000 active installations and, as such, has great potential for attackers.

Attackers also recently exploited another flaw in the plugin, one that allowed them to upload arbitrary malicious files to vulnerable systems. Judging by the comments left by some users, the initial patches for that vulnerability were apparently not effective.

Users are advised to update the plugin to the latest offered version (8.0.32), but they may choose to disable it altogether until they get confirmation that all the patches work (I’ve asked and am waiting for the response). They are also urged to clean up their site’s code to remove the offending scripts.

“Cybercriminals actively look for new vulnerabilities in popular content management systems such as WordPress and Drupal, as well as the popular plugins that are found in many websites. An unpatched vulnerability in either the CMS or associated plugins provides an entry point for attackers to compromise the website by injecting malicious code and impacting the unsuspecting users visiting these sites,” Shrotriya pointed out, and urged users to keep their installations up-to-date.


from Help Net Security http://bit.ly/2YXvNEM

Insight Partners acquires Recorded Future for $780 million

Insight Partners has agreed to acquire a controlling interest in Recorded Future. The all-cash transaction values Recorded Future at more than $780 million and will accelerate the next phase of the company’s global growth and expansion.

Today, Recorded Future is the largest privately-held threat intelligence software company in the world, with more than 400 clients on its SaaS platform and hundreds of new clients added every year across all geographies and sectors.

The company has seen tremendous organic growth over the last 10 years as the threat intelligence market has continued to expand rapidly, with a direct impact on adjacent categories such as security operations, vulnerability management and third-party risk.

“Insight’s renewed investment is a testament to the vision and direction laid out by Recorded Future’s leadership team. They envision a world where everyone applies intelligence at speed and scale to reduce risk, remaining hyper-focused on providing clients with the threat intelligence necessary to understand their environments, manage risk, and combat malicious actors through contemporary awareness gained from the implementation of a threat intelligence-led security strategy,” said Mike Triplett, managing director at Insight.

According to the CEO of Recorded Future: “This is a truly exciting time for the Recorded Future family of employees, clients, and partners. I am particularly grateful to the exceptional women and men who have worked tirelessly and with so much self-sacrifice for many years to build the amazing company and solution that we have today. The renewed investment in Recorded Future by Insight validates our hard work. This partnership lays the foundation to take our products and software to the next level to best serve our clients, changing the face of our industry as we drive an intelligence-led strategy to help reduce risk and enable business operations for clients around the globe. The best is still to come for our company and our clients, and it is going to be an awesome ride as we build upon our joint successes!”

“As a market leader, Recorded Future provides its services to the greatest number of large enterprises of any industry competitor, offering the best customer care and highest Net Promoter Score,” said Insight Vice President Thomas Krane. “By doubling down on this long-term partnership, Insight looks forward to continuing our work in a high-growth market with this strong team of leaders — an asset that is core to our DNA at Insight.”

Pursuant to the terms of this investment, Insight’s Mike Triplett and Thomas Krane will join Recorded Future’s board of directors.

Chris Pasko and Ivan Brockman of PJT Partners served as advisors for Recorded Future.


from Help Net Security http://bit.ly/2EIYKg1

G Suite to get Gmail confidential mode, on by default

Earlier this year, Google introduced Gmail confidential mode for both consumer and G Suite users. While the former were able to use it immediately, the latter depended on whether their domain admin chose to enable it (as it was and is still in beta).

But starting on June 25, the feature will be turned on by default, and it will be on admins to turn it off: if they don’t explicitly disable it before that date, their users will get it.

How does Gmail confidential mode work?

Confidential emails are self-destructing and/or protected by passwords, and impossible to forward, copy, download or print. They can also be revoked.

“When a user sends a confidential message, Gmail replaces the message body and attachments with a link. Only the subject and body containing the link are sent via SMTP,” Google explained.

“This means that if your users send or receive messages in Gmail confidential mode, Vault will retain, preserve, search and export confidential mode messages. The message body of received messages will be accessible in Vault only if the sender of the message is from within your organization.”

Gmail clients make the linked content appear as if it’s part of the message, but third-party mail clients display a link in place of the content.

Options and warnings

G Suite administrators can:

  • Disable or enable Gmail confidential mode for their entire domain or for specific organizational units (users can still receive messages in confidential mode)
  • Block all incoming messages in confidential mode by setting up a compliance rule
  • Define rules to handle confidential mode messages.

Google also warns that the feature might be incompatible with organizations’ eDiscovery and retention obligations if their domain uses third-party eDiscovery or archiving tools.

“Before enabling this feature, we recommend you discuss the impact with your eDiscovery administrators and other policymakers,” they advise.

Finally, Google made sure to point out that recipients can still take screenshots or photos of emails sent in confidential mode, and that malware may be able to copy or download the emails or the attachments in them.


from Help Net Security http://bit.ly/2WevAzX

Wednesday, May 29, 2019

A veteran’s look at the cybersecurity industry and the problems that need solving

For many in the infosec industry, Daniel Miessler needs no introduction, as he’s a 20-year industry veteran, a professional that fulfilled a variety of security roles at companies like HP and IOActive, a leader of the OWASP IoT Security Project and, most prominently, the author of the popular Unsupervised Learning podcast, newsletter and blog.

Apart from effectively curating and summarizing content produced by others, Miessler is also the source of interesting ideas and occasionally unorthodox opinions, such as the claim that we have exactly the right amount of software security, given how highly we prioritize it compared to building features and expanding business.

“If we were losing a lot more money, or lots of people were getting hurt or killed, security would improve overnight. That isn’t happening because the security we currently have is mostly good enough,” he told Help Net Security.

But, he believes, that status quo could change soon due to the emerging IoT proliferation, blossoming privacy challenges and the general digitization of more and more of our lives.

“Once insecurity starts colliding with our ability to run successful businesses – in a real way, not just being an annoyance – and/or people start getting hurt, that’s when we’ll see a combination of regulation and laser focus on security from industry,” he opined.

Current industry problems

Despite the fact that the information security industry has been developing for the past few decades, it is still in the “wizardry and alchemy” phase, and that’s why, according to Miessler, sales are still linked to disasters.

Breaches and hacking are still wild and mysterious and scary to business leaders, he says. Fear causes emotional reactions and, when business people get scared, they open their wallets. Still, most of those in the industry would genuinely like to see organizations adopting sound security practices and cybercriminals getting the shorter end of the stick.

“As industries mature they become more boring – like accounting, or insurance. That’s ironically the goal of security: to be able to translate every decision into a tradeoff between cost of control and cost of impact. Right now we’re nowhere close to this – we’re still a bunch of wizards trying to have an accounting conversation,” he noted.

“The bigger problem is that we don’t have a common language that bridges infosec and business, since security people can’t quantify their risk as money, and business people ultimately see everything in those terms. This is why people who can translate between the two are in such demand.”

Yet another problem that needs solving as soon as possible is how to find and hire the right talent for cybersecurity roles.

The problem is caused by a dearth of entry-level cybersecurity positions and bad hiring processes by most companies, Miessler believes.

“The skills required to do even an introductory level position in infosec are significant. If you don’t have some foundation in system administration, networking, or programming – or some other practical experience related to security – you can be repeatedly passed over for positions,” he noted.

“The best thing you can possibly do to get into security is figure out the exact skills that employers are looking for and come to the interview being somewhat functional in one or more of them. Employers want to know what you can do immediately, because they don’t have the time or the risk tolerance to train someone new and potentially find out they’re not a good fit. You have to be useful on day one.”

This is why university interns and bug-bounty people have such a major advantage in the market, he says – they come into conversations having already done projects and seen the real world, so even if they’re not very advanced, they’re immediately functional.

Employers, on the other hand, can start by concentrating less on the filtering and hiring techniques of the past (e.g., degrees) and more on verifying that the candidate can build something, code, or solve problems.

Future cybersecurity industry problems

As noted earlier, one of the key challenges he expects the infosec industry will have to tackle in the next five or so years is privacy. Fueled by the rise of IoT and wearables, data about all of us will become a primary currency in our economy, he believes.

Data stolen and misused by cyber criminals is just a small part of the problem – the bigger issue is data brokers, who systematically organize and sell our data in a way no criminal can, he notes.

“If we want to address privacy properly we need to look at the ‘legitimate’ business models that are built around doing precisely the opposite of what most consumers want done with their data,” he added.

Another big challenge for the industry is staying relevant.

“If security groups can’t stop breaches, the public doesn’t stop doing business with a company when it gets compromised, and the business can gain protection through insurance, they might very well switch their efforts towards insurance-based protection,” he explained.

“In that world, the business asks the insurance companies what they should do, since they’ll be the ones with the best data on how to protect things. We’re not there yet, but I think the future has much of the infosec world working within the context of insurance. Infosec’s biggest problem is not being able to make data-based decisions on what to do to reduce the most risk, and insurance companies are the best situated from an incentive and business standpoint to collect and make use of that data.”


from Help Net Security http://bit.ly/2Wz5d7h

Majority of CISOs plan to ask for an increase in cybersecurity investment

Most CISOs of financial institutions (73 percent) plan to ask their organization’s CFO for an increase in cybersecurity investments in the next year, according to the Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry consortium dedicated to reducing cyber-risk in the global financial system.

“The advancement and adoption of new technologies coupled with increased geopolitical tension has fueled a rapidly evolving cyber threat landscape,” said Steve Silberstein, CEO of FS-ISAC. “An effective cybersecurity program needs to adapt to this environment and funding must be deemed as a cross-functional investment.”

The survey also found that 56 percent of the respondents said 10 percent or less of their organization’s overall budget is dedicated to cybersecurity. Within that budget, a majority (54 percent) cited IT infrastructure and asset management as the area that receives the most funding.

The three areas that receive the least funding are employee training and education (four percent), vendor management (six percent) and business continuity (nine percent).

“Institutions are now finding vulnerabilities across other functions of the business with employees and third-party vendors becoming areas of increasing concern,” said Silberstein. “A holistic approach to cyber is critical to mitigate current and long-term risks.”

Additional key findings

  • A combined 27 percent of respondents cited regulatory requirements (14 percent) or risk management and governance (13 percent) as the part of the total cybersecurity budget that receives the most funding within their organization.
  • Seventy-one percent said their organization’s incident response plan is tested across the organization once a year, compared to 29 percent that reported that it is only tested within the IT environment once a year.
  • Seventeen percent of respondents said they also plan to ask for a further expansion of their cybersecurity budget by 2021.

from Help Net Security http://bit.ly/2EIuGBb

Security overconfidence and immaturity continue to endanger organizations

The majority of organizations are ill-prepared to protect themselves against privileged access abuse, the leading cyber-attack vector, according to Centrify and Techvangelism.

Seventy-nine percent of organizations do not have a mature approach to Privileged Access Management (PAM), yet 93% believe they are at least somewhat prepared against threats that involve privileged credentials.

This overconfidence and immaturity are underscored by the 52% of organizations surveyed that state they do not use a password vault, indicating that the majority of companies are not taking even the simplest measures to reduce risk and secure access to sensitive data and critical infrastructure.
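
To illustrate how low that bar is: the vault-centric pattern simply means applications and administrators fetch privileged credentials from a central store at runtime instead of hardcoding them. The minimal sketch below uses HashiCorp Vault’s hvac client purely as a generic stand-in, not as a reference to any product in the survey; the address, token handling, and secret path are assumptions for the example.

    import os
    import hvac  # HashiCorp Vault client, used here as a generic stand-in

    # Minimal vault-centric pattern: fetch a privileged credential from a
    # central secrets store at runtime instead of hardcoding it. The
    # address, token handling, and secret path below are hypothetical.
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],     # e.g. https://vault.example.com:8200
        token=os.environ["VAULT_TOKEN"],  # short-lived token from your auth flow
    )

    # Read the current version of a KV v2 secret. Rotation happens inside
    # the vault, so the application never holds a long-lived password.
    secret = client.secrets.kv.v2.read_secret_version(path="prod/db-admin")
    db_password = secret["data"]["data"]["password"]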

The survey of 1,300 organizations across 11 industry verticals in the U.S. and Canada reveals that most organizations are fairly unsophisticated and still taking Privileged Access Management approaches that would best be described as “Nonexistent” (43%) or “Vault-centric” (21%).

More sophisticated organizations take an “Identity-Centric” (15%) approach that tries to limit shared and local privileged accounts, replacing them with centralized identity management and authentication with an enterprise directory.

The most protected organizations are considered “Mature” (21%) because they go beyond vault- and even identity-centric techniques, hardening their environment further via a number of initiatives (e.g., centralized management of service and app accounts and enforcing host-based session, file, and process auditing).

“This survey indicates that there is still a long way to go for most organizations to protect their critical infrastructure and data with mature Privileged Access Management approaches based on Zero Trust,” said Tim Steinkopf, CEO of Centrify. “We know that 74% of data breaches involve privileged access abuse, so the overconfidence these organizations exhibit in their ability to stop them from happening is concerning. A cloud-ready Zero Trust Privilege approach verifies who is requesting access, the context of the request, and the risk of the access environment to secure modern attack surfaces, now and in the future.”

The survey also revealed some specific insights about the solutions being used to control privileged access, including:

  • 52% of organizations are using shared accounts for controlling privileged access.
  • 58% of organizations do not use Multi-Factor Authentication (MFA) for privileged administrative access to servers.
  • 51% of organizations are not controlling privileged access to transformational technologies and modern attack surfaces, such as cloud workloads (38%), Big Data projects (65%), and containers (50%).

Looking at organizations’ PAM maturity by industry, some surprises emerged:

  • 39% of Technology organizations have a Nonexistent approach to PAM.
  • Two highly-regulated industries, Healthcare (45%) and Government (42%), also scored high for Nonexistent PAM maturity.
  • Finance (27%) unsurprisingly scored highest in the Mature category, followed by Energy/Utilities (26%), and then Technology (25%), as well as Healthcare (22%).
  • Professional Services is taking a highly Vault-Centric approach to PAM at 29% of organizations.

from Help Net Security http://bit.ly/2XdXatK

New initiative aims to strengthen IoT security, interoperability and reliability

The Zigbee Alliance publicly announced a major ongoing initiative to make smart home and IoT products easier to develop, deploy, and sell across ecosystems.

The All Hubs Initiative is driven by a Zigbee Alliance workgroup composed of leading IoT companies including Amazon, Comcast, Exegin, Kwikset, Landis+Gyr, LEEDARSON, Legrand, MMB Networks, NXP, OSRAM, Schneider Electric, Silicon Labs, Somfy, and many others, with the goal of improving interoperability between IoT devices and major consumer and commercial platforms.

The product of this effort is a set of features at the application and network layers of Zigbee that will be incorporated into the upcoming 3.1 version of Zigbee technology.

“Consumers and businesses want connected devices that offer value and convenience, work great, and work together seamlessly,” said Chris DeCenzo, chair of the All Hubs Initiative workgroup, board director of the Zigbee Alliance, and principal engineer at Amazon. “Through the All Hubs Initiative, leading IoT companies in the Zigbee Alliance are working together to define interoperability standards to help device makers innovate and expand selection while continuing to deliver consistent, reliable experiences for customers.”

Meeting the evolving needs of major ecosystems

IoT and smart home ecosystems can vary in their supported features, business models, value propositions, customer experience expectations, security requirements, and other factors. This is the nature of the openness and innovation of the IoT, and the Zigbee standard was designed to support this flexibility across offerings from Amazon, Samsung SmartThings, Philips Hue, IKEA, and others.

However, this flexibility can sometimes create challenges for device vendors trying to build and market products that meet the requirements of different ecosystems and earn their coveted “Works With” badges – and challenges for businesses and customers using those products across a number of hubs.

“As innovation across the IoT continues to accelerate, device vendors need to ensure their products can adapt to the diverse and evolving requirements of multiple ecosystems, and reliably work across major IoT hubs,” said Tobin Richardson, President and CEO, Zigbee Alliance. “The All Hubs Initiative is not just an important effort in strengthening interoperability, but a phenomenal example of how global industry leaders and innovators come together within the Zigbee Alliance to share best practices and solve industry-wide challenges.”

The All Hubs Initiative is not a specific version of Zigbee technology, but rather a list of features that will contribute to the core Zigbee roadmap. More specifically, they are a set of updates to the Zigbee specification at both the application and network layers that maintain the flexibility of Zigbee to meet diverse market needs, while improving interoperability.

Key to maintaining this flexibility, these improvements establish a robust method by which hubs can communicate their supported and required features to new devices that join their networks, and by which those devices learn how to configure themselves appropriately. They also further standardize the process of commissioning and operating Zigbee devices based on the best practices and real-world experience of Alliance member companies. These updates will be part of Zigbee 3.1 – the next iteration of the Zigbee standard, which is currently scheduled for release later in 2019.
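
The exact message formats are defined by the Alliance specification rather than in this announcement, but the configuration pattern can be sketched. In the hypothetical Python sketch below, a hub advertises its supported and required features, and a joining device enables only the intersection both sides understand; all feature names are invented for illustration and do not come from the Zigbee spec.

    # Hypothetical sketch of hub/device feature negotiation. This is not
    # the Zigbee 3.1 wire format -- just the configuration pattern the
    # announcement describes, with invented feature names.

    HUB_DESCRIPTOR = {
        "supported": {"ota_upgrade", "green_power_proxy", "install_codes"},
        "required": {"install_codes"},  # devices must support these to join
    }

    DEVICE_FEATURES = {"ota_upgrade", "install_codes", "touchlink"}

    def configure_device(hub, device_features):
        """Return the feature set a joining device should enable on this hub."""
        missing = hub["required"] - device_features
        if missing:
            raise RuntimeError(f"cannot join: hub requires {missing}")
        # Enable only the features both sides understand; ignore the rest.
        return device_features & hub["supported"]

    print(configure_device(HUB_DESCRIPTOR, DEVICE_FEATURES))
    # -> {'ota_upgrade', 'install_codes'} (set order may vary)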

These updated features and Zigbee 3.1 itself will be backwards compatible with Zigbee 3.0 certified devices and hubs. Some ecosystems, however, are participating in “early implementations” of the All Hubs Initiative’s features and may request that device vendors support these features as part of their “Works With” programs, ahead of the formal launch of Zigbee 3.1.

Bringing the industry together

The project was envisioned in the fall of 2017 at the first Hive Executive IoT Summit organized by the Zigbee Alliance. These exclusive events take place annually around the globe to bring together market-moving individuals and organizations from across the IoT industry. Attendees represent companies from within and outside the Zigbee Alliance and include technology executives, visionaries, and leaders from other standards bodies.

As a result of key discussions on the interoperability challenges facing major platforms, device vendors, and consumers, key ecosystem participants committed to working together to solve these issues. Together, they formed the All Hubs Initiative, which now operates as a technical workgroup under the Zigbee Alliance umbrella. Securing buy-in from leaders throughout the IoT landscape gives the All Hubs Initiative strong backing and diverse support as an ‘all hands in’ effort for the growth of the entire industry.


from Help Net Security http://bit.ly/2WbddMq

Businesses are struggling to implement adequate IAM and PAM processes, practices and technologies

Businesses find the identity and access management (IAM) and privileged access management (PAM) security disciplines difficult, yet remain largely unconcerned about them.

The results suggest that IAM- and PAM-related security tasks may be deprioritized or neglected, potentially exposing organizations to data breaches and other cyber risks. Conducted at RSA Conference in early March 2019, One Identity’s study polled 200 conference attendees on their biggest security challenges and concerns, as well as their workplace behaviors related to network and system access.

Among the survey’s most significant findings are that one-third of respondents say PAM is the most difficult operational task, and only 16 percent of respondents cite implementing adequate IAM practices as a top-three concern when it comes to securing the cloud. Meanwhile, only 14 percent of survey respondents say better employee access control would have a significant impact on their business’s cybersecurity.

These and other findings from the study indicate that businesses are struggling to implement adequate IAM and PAM processes, practices and technologies, and may be overlooking the disciplines’ impact on their security postures altogether.

A significant “identity” crisis

More than one in four respondents cite user password management, and more than one in five cite user life cycle management (i.e., user provisioning and deprovisioning), as the most difficult operational task – both well-recognized as basic identity management requirements. Additionally, nearly one in four say Active Directory (AD) is the most difficult system for their business to secure. This is particularly concerning given how prevalent AD is in most organizations.

IAM carelessness in the cloud

When asked to share their top three concerns when it comes to securing the cloud, nearly three in four respondents cited data loss. While 44 percent of respondents selected malicious outsiders and the same percentage selected careless insiders, only 16 percent said implementing adequate IAM practices was a top concern. These results are paradoxical, given that IAM practices – such as policy-based user access control and multi-factor authentication – can help mitigate both insider and outsider cyber risks.
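
For a sense of how lightweight the first of those practices can be, here is a minimal, hypothetical policy-based access check – not any particular vendor’s API: access is denied by default and granted only when an explicit policy ties a user’s role to a resource and action.

    # Minimal, hypothetical policy-based access control check; real IAM
    # systems add groups, conditions, MFA state, and audit logging.

    POLICIES = {
        # (role, resource, action)
        ("dba", "prod-database", "read"),
        ("dba", "prod-database", "write"),
        ("analyst", "prod-database", "read"),
    }

    def is_allowed(user_roles, resource, action):
        """Deny by default; allow only on an explicit matching policy."""
        return any((role, resource, action) in POLICIES for role in user_roles)

    assert is_allowed({"analyst"}, "prod-database", "read")
    assert not is_allowed({"analyst"}, "prod-database", "write")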

Have access, will snoop; won’t get caught, will steal

The study also uncovered interesting workplace confessions related to user access and security behaviors. Nearly seven in 10 respondents admit they would look at sensitive files if they had unlimited access to data and systems. More than six in 10 say they would take company data or information if they were leaving and no one would find out. Additionally, more than six in 10 admit to some wrongdoing in their workplace. For example, nearly two in five have shared a password and nearly one in five have sacrificed security guidance in order to get something done quickly.

“Our study results paint a bleak picture of how IAM and PAM are being prioritized and managed within organizations today,” said David Earhart, president and general manager of One Identity. “Looking at the bigger picture, businesses are unnecessarily facing major challenges with IAM- and PAM-related tasks given the technology and tools available today. Our hope is that this study lights a spark for organizations to make a concerted effort to address these challenges and improve their IAM and PAM strategies and practices to avoid cyber pitfalls.”


from Help Net Security http://bit.ly/2WeAe11

Many are seeing the damage of cybercrime and identity theft firsthand

As massive data breaches continue to make international headlines and the Internet becomes an ever more integral part of our daily lives, consumers are now grasping the risks they face. In a new F-Secure survey, 71% of respondents say they feel that they will become a victim of cybercrime or identity theft, while 73% expressed similar fears about their kids.

“These findings are absolutely staggering and show many people are seeing the damage of cybercrime or identity theft firsthand,” said Kristian Järnefelt, Executive Vice President, Consumer Cyber Security at F-Secure.

The survey finds that over half of consumers (51%) have had a family member affected by some form of cybercrime. Malware or viruses are the most common threats encountered, followed by credit card fraud and then SMS/call fraud. One out of four users said that they have been impacted by several forms of cybercrime.

“It’s almost impossible to avoid using the Internet in 2019. Cloud services are now a norm, yet we don’t always know what information about us has been collected, and where it’s stored,” said Järnefelt. “F-Secure’s B2B cyber security teams are already seeing many of these cloud services or businesses becoming lucrative targets for the criminals to steal massive amounts of consumer data.”

Businesses have increasingly accepted that the question is not whether they will be breached, but when. This means that even consumers who practice excellent cyber security can suffer the loss of personal data.

“Once personal information has been leaked, it is impossible to get it back. And you may not be aware of potential issues for years,” Järnefelt said. “In most cases it is a matter of speed. If consumers can react fast enough, criminals may well find their stolen goods are useless.”

Traditional cybercrime is still more prevalent than identity theft or account takeover, yet the latter types of attacks keep increasing. A comprehensive approach to consumer cyber security is therefore necessary.

“Consumers deserve the same complete protection we offer our business customers but tailored to how we as individuals use the Internet,” said Antero Norkio, Vice President, Product Management at F-Secure. “We need to cover the full cyber security process from preventing threats from happening to adding new detection and response capabilities to know you’re under a targeted attack.”


from Help Net Security http://bit.ly/2IbsEdJ

Palo Alto Networks to acquire Twistlock and PureSec

Palo Alto Networks has entered into definitive agreements to acquire Twistlock, the leader in container security, and PureSec, a leader in serverless security, to extend its Prisma cloud security strategy.

These proposed acquisitions will further advance the company’s ability to offer a complete and comprehensive suite covering all critical areas of cloud security. Prisma, used by approximately 9,000 customers worldwide, helps enable a secure journey to the cloud by providing organizations with visibility across the entire cloud environment while consistently governing access, protecting data, and securing applications regardless of location.

With the additions of Twistlock and PureSec to the Prisma cloud security suite, Palo Alto Networks will be uniquely positioned to secure today’s modern applications throughout the entire life cycle, enabling organizations to deliver innovations that are secure, reliable, and scalable.

“Today marks another exciting step forward in our commitment to offering our customers the industry’s most complete cloud security offering. We believe that our acquisition of these leading companies will significantly enhance our ability to be the cybersecurity partner of choice for our customers, while expanding our capabilities and strengthening our Prisma cloud security strategy,” said Nikesh Arora, chairman and CEO of Palo Alto Networks.

Twistlock

Palo Alto Networks will pay approximately $410 million in cash to acquire Twistlock. The container security leader combines vulnerability management, compliance, and runtime defense for cloud-native applications and workloads. The company serves more than 290 customers, with more than a quarter on the Fortune 100 list. Twistlock co-founders, Ben Bernstein and Dima Stopel, will join Palo Alto Networks.

“Our vision for a cloud-native security platform is a natural fit with Palo Alto Networks’ cloud strategy. We have like-minded teams, and we’re looking forward to accelerating our ability to serve customers and partners on their cloud-native journey together,” said Ben Bernstein, co-founder and CEO, Twistlock.

PureSec

PureSec enables its customers to build and maintain secure and reliable serverless applications. The company provides end-to-end security for serverless functions that cover vulnerability management, access permissions, and runtime threats. The company was recognized as a Gartner Cool Vendor in April 2019. PureSec co-founders, Shaked Zin, Ory Segal, and Avi Shulman, will join Palo Alto Networks. Terms of the PureSec transaction were not disclosed.

“PureSec’s vision has always been to ensure that all serverless applications will be secured at the very highest level. By joining forces with Palo Alto Networks, we will undoubtedly be able to make that a reality much faster. We are humbled and excited about this opportunity,” said Shaked Zin, co-founder and CEO, PureSec.


from Help Net Security http://bit.ly/2HJYE9y

Moogsoft AIOps 7.2 eases the burden of IT operations and DevOps teams

Moogsoft, a pioneer and leading provider of artificial intelligence for IT operations (AIOps), released Moogsoft AIOps 7.2, the latest version of its enterprise platform.

Release 7.2 features groundbreaking new capabilities that ease the burden of IT Operations and DevOps teams by optimizing service assurance. Significant new transparency, efficiency, and customization enhancements include: a new workflow engine, AI visualizations, performance dashboards, and new tool integrations.

“Operations teams seek ways to tame the complexity of their IT environments and make sense of frequent alert storms,” said Nancy Gohring, senior analyst for application and infrastructure performance at 451 Research. “Applying sophisticated analytics, including machine learning, is useful for improving the correlation of alerts and delivering better visibility. Automation enabled by this visibility will have a positive effect on incident response time and increase overall ops team productivity.”

New workflow engine manages workloads, automates ticketing & notifications

Moogsoft AIOps 7.2’s new Workflow Engine provides IT Ops teams the ability to visually create sophisticated custom workflows using a simple but powerful user interface. A rich set of workflow options can trigger actions both within Moogsoft and to external systems for actions such as notifications, ticket creation, and other automated tasks.

The Workflow Engine simplifies conditional event processing with enrichment of event alert data, enabling automation of incident management workflows as well as integration with automated remediation tools.
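
The announcement does not detail the Workflow Engine’s internals, but conditional event processing with enrichment follows a recognizable shape. The hypothetical sketch below enriches an alert with ownership data, then routes it to a ticketing, notification, or suppression action; all function and field names are invented for the example, not taken from the product.

    # Hypothetical illustration of conditional event processing with
    # enrichment; all function and field names are invented.

    def enrich(alert):
        """Attach ownership metadata looked up from a (stubbed) CMDB."""
        owners = {"web-01": "frontend", "db-01": "dba"}
        alert["team"] = owners.get(alert["host"], "unassigned")
        return alert

    def run_workflow(alert):
        """Route an enriched alert to a ticketing, notification, or no-op action."""
        alert = enrich(alert)
        if alert["severity"] >= 5:
            return f"open_ticket(team={alert['team']})"  # e.g. a ticketing call
        if alert["severity"] >= 3:
            return f"notify(team={alert['team']})"       # e.g. a chat message
        return "suppress()"

    print(run_workflow({"host": "db-01", "severity": 5}))
    # -> open_ticket(team=dba)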

Situation visualization increases transparency, understanding of how algorithms work

Situation Visualization provides powerful new visual tools for understanding the operation of Moogsoft’s alert clustering algorithms and, if needed, for fine-tuning them. Similarity clusters are presented as radar charts for each Situation. They provide a window into how the system’s automated decision making works. Users can understand at a glance the matching criteria for those events that have been correlated together into a single Situation.
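
Moogsoft’s clustering algorithms are proprietary, but the general technique of grouping alerts into Situations by attribute similarity can be made concrete. The sketch below uses Jaccard similarity over alert attribute sets as a stand-in for the product’s matching criteria; it illustrates the shape of the idea, not the implementation.

    # Illustrative similarity clustering of alerts into "Situations".
    # Jaccard similarity over alert attribute sets stands in for whatever
    # matching criteria the real engine uses.

    def jaccard(a, b):
        """Overlap between two attribute sets, from 0.0 to 1.0."""
        return len(a & b) / len(a | b)

    def cluster(alerts, threshold=0.5):
        situations = []  # each situation is a list of (alert_id, attrs) pairs
        for alert_id, attrs in alerts:
            for situation in situations:
                # Join the first situation with a similar-enough member.
                if any(jaccard(attrs, other) >= threshold for _, other in situation):
                    situation.append((alert_id, attrs))
                    break
            else:
                situations.append([(alert_id, attrs)])
        return situations

    alerts = [
        ("a1", {"host:web-01", "svc:nginx", "type:latency"}),
        ("a2", {"host:web-02", "svc:nginx", "type:latency"}),
        ("a3", {"host:db-01", "svc:postgres", "type:disk"}),
    ]
    print(len(cluster(alerts)))  # -> 2 Situations: the nginx pair, and a3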

Together with Probable Root Cause, Topology, and other visualizations, Moogsoft’s Situation Room offers real-time situational awareness to IT Ops and DevOps teams.

Customization features conform to customers’ unique organizational needs

Moogsoft AIOps 7.2 introduces a number of new features that personalize and configure the platform for a customer’s unique environment and organizational requirements. These comprise:

  • Situation room headers. The information presented in Situation Room headers can be easily customized to improve operational efficiency. Team members can understand the Situation at a glance and decide on next steps.
  • Individual statistics. A new analytics dashboard called Individual Statistics allows managers to drill down from the team level to better understand the workload and key performance indicators of each individual team member. This insight allows team leaders and all members to optimize work distributions and overall operational effectiveness.
  • New tool integrations. Moogsoft AIOps platform continues to expand its broad suite of out-of-the-box integrations for faster time to value. New integrations include connectors to New Relic Insights, Microsoft Teams, and proxy support for all polling integrations (e.g. Zenoss, Zabbix, vCenter, vSphere, Solarwinds, Spectrum, and SevOne).

“AIOps is gaining momentum in streamlining IT Operations as well as DevOps,” explains Phil Tee, Chairman and CEO of Moogsoft. “We’ve built the AIOps market from the beginning, pioneered the way with over 50 patents, and now help over 120 of the largest corporations transform their IT service assurance. Today we’re delivering the next-generation platform to democratize the use of AIOps at all organizations. Our goal is to make Moogsoft the solution of choice for all enterprises – large and small – for agile, proactive event resolution. To this end, release 7.2 empowers enterprises to avoid outages, meet service level agreements, and accelerate digital transformation.”


from Help Net Security http://bit.ly/2Kdf7VF

SailPoint Predictive Identity platform: The future of identity governance

SailPoint, the leader in enterprise identity governance, unveiled SailPoint Predictive Identity, an intelligent cloud identity platform designed to accelerate the industry to the next generation of identity governance. With SailPoint Predictive Identity, SailPoint is delivering a new world of adaptive security and continuous compliance that makes identity easy, transparent and autonomous.

“The next phase of identity needs to anticipate user access needs, spot and respond to risky behavior, achieve continuous compliance and adapt security policies to respond to today’s dynamic business environment,” said Paul Trulove, SailPoint Chief Product Officer. “Our customers turn to us as the industry leader to constantly innovate by delivering the next generation of identity that addresses their ever-changing business and IT needs. We are once again redefining the boundaries of what a comprehensive identity governance program should be capable of with SailPoint Predictive Identity.”

SailPoint Predictive Identity is built on big data and machine learning (ML) technology which enables an AI-driven approach to identity governance, taking identity from reactive to predictive and autonomous. SailPoint Predictive Identity:

  • Automates identity processes using AI-driven recommendations while finding new areas of access and bringing them under governance with auto-discovery.
  • Provides predictive modeling allowing for instant discovery and creation of access policies using ML, while ensuring access is always up-to-date with current business needs.
  • Drives adaptive security powered by AI, alerting security professionals when potentially dangerous behaviors are detected and, with peer group modeling, identifying hidden risk due to inappropriate access (a simplified sketch of the peer group idea follows this list).
  • Delivers continuous compliance, quickly shaping and evolving compliance policies with AI-suggested policies and, using machine learning, launching targeted certification campaigns on risky users and areas of access.
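
As promised above, here is a simplified, hypothetical sketch of the peer group modeling idea: entitlements a user holds that almost none of their peers (same role or department) hold are flagged as potential hidden risk. This illustrates the concept only and is not SailPoint’s algorithm; all names and the threshold are assumptions.

    # Hypothetical peer-group outlier check: flag entitlements a user
    # holds that almost none of their peers (same role/department) hold.
    # Concept illustration only -- not SailPoint's algorithm.
    from collections import Counter

    def outlier_access(user_entitlements, peers_entitlements, threshold=0.1):
        """Return the user's entitlements held by fewer than `threshold` of peers."""
        counts = Counter(e for peer in peers_entitlements for e in peer)
        n = len(peers_entitlements)
        return {e for e in user_entitlements if counts[e] / n < threshold}

    peers = [{"crm", "email"}] * 10           # ten peers with ordinary access
    user = {"crm", "email", "payroll-admin"}  # one grant none of the peers have

    print(outlier_access(user, peers))  # -> {'payroll-admin'}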

SailPoint Predictive Identity capabilities will be integrated into IdentityIQ 8.0 and the newest release of IdentityNow, both available in June 2019.

“There are two factors at play among organizations of all sizes today,” Trulove continued. “First, the velocity of today’s business environment is moving at a break-neck pace. This is compounded by the fact that today’s IT landscape is highly complex given the number and diversity of users, applications and data that organizations now manage. Identity and business teams simply cannot keep up with manually analyzing identity data and patterns to understand whether the right users have access to the right systems and data throughout the organization. AI and ML will play an increasingly critical role in how businesses adapt access models as the business evolves, either autonomously or through recommendation-based identity governance processes.”

“Our identity program is ever-evolving, particularly given how much our own IT environment continues to change thanks to our ongoing efforts to increasingly digitize our business,” said Shawn Lawson, Head of IT Engineering & Operations, Silicon Valley Bank. “As an identity practitioner, it is an exciting time to be in identity. The idea of being able to drive a more predictive versus reactive identity program, one that adapts to our business, security and IT needs is a welcome one.”

“In collaboration with alliance partners like SailPoint, Accenture is taking identity to the next level by making it easier for organizations to deploy and manage programs across their business,” said Rex Thexton, a managing director at Accenture who leads its Digital Identity practice. “Thanks to advanced technologies, more companies globally can now easily adopt an identity program, expediting the time it takes to establish the right roles and access rights. We are eager to work together with SailPoint and our clients to help them accelerate the adoption of identity governance as a critical foundation for a secure digital transformation.”


from Help Net Security http://bit.ly/2VYrnM3