Thursday, January 31, 2019

Safeguarding your data from human error and phishing attacks with the cloud


This is the third article in a series; the first article is available here, and the second one is here.

In a world of ransomware attacks, companies should prepare for the worst-case scenario by having smart backup strategies in place to mitigate any potential damage. The public cloud ensures that your information is always backed up and encrypted. Encrypting backup files in the cloud adds an extra layer of protection against unwelcome external parties.

Unlike most organizations, cloud providers have the resources to employ teams that stay one step ahead of hackers. Even if someone did break in, encrypted your files, and demanded money in exchange for the decryption key, having backups of your cloud data lets you restore a clean version of your files.

Companies also need to insulate themselves from human error. Employees often accidentally delete company files or make unwanted modifications. Imagine if someone edited a PowerPoint slide deck for an upcoming presentation and removed all of the important slides. While the file itself hasn’t been deleted, the vital slides are gone unless the file has been backed up. Using public cloud platforms ensures that you can customize permission settings and access past versions of documents to resolve any man-made mistakes.

Hopefully, the implementation of other security measures will mean that you’ll never need to worry about accessing your backup data. However, it’s important to cover your bases. If there are secure backups in place, your company will not have to worry about compliance or experiencing productivity losses in the event of a security breach.

Compliance

The introduction of new GDPR regulations in Europe has put data compliance back in the public eye. However, the reality is that companies have been navigating the complex and constantly evolving world of privacy law for some time. With the latest regulations, companies are no longer able to hide breaches. If they do, they face fines of up to €20 million or 4% of annual global turnover, whichever is higher. Governments are also bolstering standards to ensure that companies aren’t cutting corners when it comes to security and privacy.

GDPR, for example, is the EU legislation that applies to any organization that handles the personal data of European residents. Under GDPR, companies must control precisely where and how this information is stored. In addition, the people that they collect it from can ask for it to be updated or deleted at any time. Companies that don’t comply with requests are subject to hefty fines. Financial penalties and lawsuits aside, organizations should comply with GDPR and other government regulations because it’s simply good business.
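Operationally, honoring a data subject’s deletion request means locating and erasing that person’s records in every store that holds them, and keeping an audit trail of what was erased. A minimal sketch in Python; the store layout and field names here are assumptions for illustration, not any real system’s schema:

```python
# Toy illustration of honoring a GDPR deletion request across data stores.
# Store layout and field names are assumptions made for this sketch.
STORES = {
    "crm": [
        {"email": "ana@example.com", "name": "Ana"},
        {"email": "ben@example.com", "name": "Ben"},
    ],
    "newsletter": [
        {"email": "ana@example.com", "opted_in": True},
    ],
}

def erase_subject(stores, email):
    """Remove every record tied to the subject; return an audit trail."""
    erased = {}
    for store_name, records in stores.items():
        before = len(records)
        # Filter the subject's records out of the store in place.
        records[:] = [r for r in records if r["email"] != email]
        if len(records) != before:
            erased[store_name] = before - len(records)
    return erased  # which stores held the subject's data, and how many records
```

Returning the per-store count gives the compliance team something to log when demonstrating that a request was fulfilled.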

The burden that regulations, like GDPR, place on companies is daunting. Securing your business processes with a cloud platform helps simplify corporate compliance because public cloud companies are required to maintain their own set of compliance standards.

Training and awareness

Employees, in general, don’t have a high level of computer security knowledge. However, they need to exercise caution and avoid risky practices. Companies have a responsibility to bridge the natural skill gap by providing training and awareness programs that help to prevent well-meaning employees from doing things like accidentally uploading a malicious program to the organization’s network or inadvertently sharing confidential documents.

While the IT and executive leadership teams may have buttoned up their network from external threats, unsuspecting internal users can be a hacker’s best friend. Make sure that company-wide training initiatives are conducted regularly and include the best practices for:

  • Downloading files and using unauthorized devices
  • Suspicious links and email phishing
  • Social engineering
  • Personal device maintenance and safeguards
  • Passwords
  • Reporting a security threat

Where should companies focus their team’s attention? While employees require training on all of these topics, one security threat stands out: email phishing. Phishing attempts have grown by 65 percent in the past year, with 76 percent of businesses reporting an attack. While the methods of attack are varied, from posing as retailers or banks to “whale phishing,” where an individual with access to large sums of money or confidential company information is targeted, these cyber attacks are expensive for companies. The average cost of a successful phishing attack for a mid-sized company is $1.6 million. Educating employees about the warning signs of a phishing attempt and offering clear reporting instructions is essential.

Summary

Given the potentially huge financial gains, hackers will always be trying to break into your systems and human error will continue to put your data at risk. To protect yourself, your employees, and your company, you need to put everything in the public cloud. These solutions are able to keep your company’s data secure by:

  • Using their extensive resources and expertise to ensure that your network and infrastructure stay secure.
  • Automatically implementing software and security updates without service disruptions or the need to coordinate with other departments.
  • Allowing you to set up customized document permissions and integrated workflows to increase security and improve productivity.
  • Automating file management to minimize the risk of human error.
  • Providing access controls and change logs to minimize files’ exposure to unwanted modifications and sharing.
  • Using aggregated audit data to identify and investigate suspicious events and creating automated alerts that allow you to immediately respond to security breaches.
  • Automatically backing up and encrypting your files, protecting you from ransomware and providing you with a secure file repository in the case of a security breach.
  • Making it easier to stay compliant with your industry’s regulations.
  • Offering user-friendly security controls, like two-factor authentication, that make training employees easier while also providing your company with an extra layer of security.
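The aggregated-audit-data point in the list above can be illustrated with a minimal sketch: scan audit events for logins from unfamiliar IPs and unusually high download volume, and emit alerts. The event schema, known-IP map, and threshold below are invented for illustration; real cloud audit logs carry far richer fields:

```python
from collections import defaultdict

# Invented audit events and baseline data for this sketch.
EVENTS = [
    {"user": "alice", "action": "login", "ip": "10.0.0.5"},
    {"user": "alice", "action": "download", "ip": "10.0.0.5"},
    {"user": "bob", "action": "login", "ip": "203.0.113.9"},
    {"user": "bob", "action": "download", "ip": "203.0.113.9"},
    {"user": "bob", "action": "download", "ip": "203.0.113.9"},
    {"user": "bob", "action": "download", "ip": "203.0.113.9"},
]
KNOWN_IPS = {"alice": {"10.0.0.5"}, "bob": {"198.51.100.2"}}
DOWNLOAD_LIMIT = 2  # alert once a user exceeds this many downloads

def audit_alerts(events, known_ips, download_limit):
    """Return alert strings for unfamiliar login IPs and bulk downloads."""
    alerts = []
    downloads = defaultdict(int)
    for e in events:
        if e["action"] == "login" and e["ip"] not in known_ips.get(e["user"], set()):
            alerts.append(f"{e['user']}: login from unfamiliar IP {e['ip']}")
        if e["action"] == "download":
            downloads[e["user"]] += 1
            if downloads[e["user"]] == download_limit + 1:  # fire once
                alerts.append(f"{e['user']}: unusually high download volume")
    return alerts
```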

IT security is an arms race and the public cloud providers have access to the latest technology and top experts. They employ the best and brightest whose full-time jobs are to protect your information from hackers and malware. While the public cloud will provide you with secure infrastructure, even the best infrastructure is not enough. As we’ve seen, human error also poses a major security threat. Fortunately, this problem can be solved with proper training and process automation features from document management tools.


from Help Net Security http://bit.ly/2t0fMQs

Is your organization ready for the data explosion?


“Data is the new oil,” and its quantity is growing at an exponential rate, with IDC forecasting a 50-fold increase from 2010 to 2020. In fact, by 2020, it’s estimated that roughly 1.7 megabytes of new information will be generated each second for every human being. This creates bigger operational issues for organizations, with both NetOps and SecOps teams grappling to achieve superior performance, security, speed and network visibility.

This delicate balancing act will become even more difficult if organizations don’t prepare for the continuous data explosion.

The enterprise’s network challenge

Perhaps the biggest challenge organizations face is how to consume large quantities of data at an optimal pace, while still achieving the visibility needed to gain actionable insights. The emergence of the Internet of Things and an increased 5G network buildout will add to the problem in the next few years. As a result, organizations will have to support future demand by investing in local area network/wide area network (LAN/WAN) infrastructure as well as network functions virtualization (NFV) with software-defined networking (SDN) controls.

Starting in 2019, organizations need to evolve to be able to accommodate the data explosion – or risk falling behind. “Slow is the new down” in this fast-paced, always-on era, so it’s time for businesses to realize the need for speed when optimizing their network.

Enterprise social data

During the past year, data privacy took center stage, with Facebook experiencing major backlash due to the way they handle user data. Organizations can learn a lot from Facebook’s recent headlines, especially as they continue adopting new social media programs and tools to further enhance exposure on these channels. Moving forward, enterprises must pay close attention to their social data to ensure it is being properly secured.

Enterprises will also have to evaluate their use of social media and the data accessible through these channels. Given the loose laws currently in place, organizations can expect to begin hearing preliminary discussions of applicable laws at a regional level, like GDPR and other legislation that moved forward this past year. Such laws will govern the sharing and storage of data derived from social media, as well as best practice guidelines to ensure compliance within organizations ranging from SMBs to large enterprises.

Security at the forefront

A key concern that has emerged for most organizations is the state of security. As new devices are introduced into the network at an alarming rate and as companies continue to virtually connect employees around the globe, it’s critical to secure the entire IT system, from the network to the data and beyond. With an increased attack surface, security processes must protect not only the perimeter, but data in motion and at rest.

IoT devices are largely unprotected and therefore especially vulnerable to attack. Cyber criminals will continue to go after sensitive data, leveraging this ongoing data explosion. Specifically, we can expect nation-state threat actors to continue perpetrating major data breaches. In the new year, North Korea will become more daring as the White House turns its attention to finding a solution to nuclear weaponry. These factors are bound to have large brands and government organizations on edge when it comes to cybersecurity, as no one wants to face the public scrutiny following a major data breach.

Beyond nation-states, organizations will continue to see successful attacks carried out through spear phishing. The tactic will also become more targeted than ever before as more and more personal data becomes available on the dark web. Due to this uptick in spear phishing, enterprises will begin to see the rise of even more specialist tools for both NetOps and SecOps teams. These tools will require greater optimization of the network as well as security.

Is your organization ready?

In 2018, the need for improved detection, response and privacy drove the demand for security products and services. In 2019, organizations can expect this continued demand given the increasing amount of data that passes through the business each and every day.

Data will be at the forefront of business discussions across the entire organization, from the C-suite, to NetOps and SecOps teams, down to the interns. It’s no longer about speed or security, but about reaching peak performance while achieving both and more – all while having an increasing number of devices and data within systems. Is your organization ready for this explosion?


from Help Net Security http://bit.ly/2HGKjNn

Employees report 23,000 phishing incidents annually, costing $4.3 million to investigate

Account takeover-based (ATO) attacks now comprise 20 percent of advanced email attacks, according to Agari’s Q1 2019 Email Fraud & Identity Deception Trends report. ATO attacks are dangerous because they are more difficult to detect than traditional attacks – compromised accounts seem legitimate to email filters and end users alike because they are sent from a real sender’s email account.

“Credential phishing was already a huge risk for organizations because of the potential for data breach, but now there is a new wave of account takeover attacks leveraging compromised accounts to commit additional fraud, which evades traditional email security controls,” said Crane Hassold, Sr. Director of Threat Research, Agari. “Business email compromise attacks are still very active, especially against C-suite targets.”

Advanced email attacks

Brand impersonation remains the most common attack vector, used in 50 percent of advanced email attacks in the fourth quarter of 2018—with Microsoft impersonated in 70 percent of these instances. Microsoft is a common target for credential phishing because Office 365 accounts can be used in subsequent ATO attacks.

A different pattern emerges for executive targets: one-third (33 percent) of advanced email attacks against C-level employees use display name deception that impersonates an individual—a common tactic for business email compromise (BEC) attacks, which frequently target CFOs.
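Display name deception is detectable in principle because the friendly name and the actual address travel together in the From: header. A hedged sketch using Python’s standard library; the executive directory and corporate domain below are assumptions, and real detection systems weigh many more signals:

```python
from email.utils import parseaddr

EXECUTIVES = {"jane doe", "john smith"}  # assumed directory of C-level names
TRUSTED_DOMAIN = "example.com"           # assumed corporate domain

def flags_display_name_deception(from_header):
    """Flag mail whose display name matches an executive but whose
    address falls outside the corporate domain."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    return name.strip().lower() in EXECUTIVES and domain != TRUSTED_DOMAIN
```

For example, `"Jane Doe <jdoe123@gmail.com>"` would be flagged, while mail genuinely from `jane.doe@example.com` would not.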

Impersonation of the U.S. Internal Revenue Service surged in the fourth quarter as tax season approached. The IRS was impersonated in nearly one in ten attacks, up from less than one percent in the July-to-September quarter. W-2 scams are common in the runup to tax season, as criminals use phishing emails and social engineering to request a corporation’s W-2 files, which contain social security numbers, salaries and other confidential data that can be used to commit tax fraud or identity theft.

DMARC adoption

Adoption of DMARC, an email authentication standard, grew steadily during Q4, with a 15% increase in total DMARC records compared to Q3 ’18. While the number of valid Internet domains grew from 283 million to 323 million over the period covered by the report, the number of those domains with DMARC records increased from 5.3 million to 6.1 million.

Among the Fortune 500, DMARC adoption was only 54 percent, up from 51 percent three months ago.
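DMARC adoption is measured by whether a domain publishes a TXT record at _dmarc.&lt;domain&gt;; the record itself is a simple tag=value list. A minimal parser for that syntax is sketched below (a real adoption check would also perform the DNS lookup, which is omitted here):

```python
def parse_dmarc(txt_record):
    """Parse a DMARC TXT record ('tag=value; tag=value; ...') into a dict."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split on first '=' only
            tags[key.strip()] = value.strip()
    return tags

# Example record of the kind published at _dmarc.example.com:
policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100")
```

The `p=` tag carries the enforcement policy (none, quarantine, or reject); Fortune 500 domains that publish a record but set `p=none` are counted as adopters even though spoofed mail is not yet blocked.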

The impact of phishing incident response

In a survey of more than 300 businesses in the U.S. and U.K., Agari determined that employees at the average company submit 23,053 phishing incident reports per year—yet 50 percent are false positives. Responding to a phishing incident takes an average of 353 minutes (almost six hours), and even false positives take an average of 238 minutes (about four hours).

All of these reports and hours add up—at a cost of $253 per phishing incident—to more than $4.3 million per year in SOC costs required to triage, investigate and remediate phishing incidents.

“Many organizations’ security operations teams report that their work around investigating suspected phishing emails is heavily repetitive and requires many meticulous steps, such as checking multiple blacklists and different IT systems within the company,” write Gartner Research VP and Distinguished Analyst Anton Chuvakin and VP Analyst Augusto Barros in Preparing Your Security Operations for Orchestration and Automation Tools (February 2018).


from Help Net Security http://bit.ly/2RxFh5D

Companies getting serious about AI and analytics, 58% are evaluating data science platforms

New O’Reilly research found that 58 percent of today’s companies are either building or evaluating data science platforms – which are essential for companies that are keen on growing their data science teams and machine learning capabilities – while 85 percent of companies already have data infrastructure in the cloud.

evaluating data science platforms

Companies are building or evaluating solutions in foundational technologies needed to sustain success in analytics and AI. These include data integration and Extract, Transform and Load (ETL) (60 percent), data preparation and cleaning (52 percent), data governance (31 percent), metadata analysis and management (28 percent) and data lineage management (21 percent).

Companies are building data infrastructure in the cloud. Eighty-five percent indicated that they had data infrastructure in at least one of the seven top cloud providers, with nearly two-thirds (63 percent) using Amazon Web Services (AWS). The results also showed that users of AWS, Microsoft Azure or Google Cloud Platform (GCP) tended to use multiple cloud providers.

The use of durable cloud storage is prevalent. Sixty-two percent of all respondents indicated they used at least one of the following: Amazon S3 or Glacier, Azure Storage, or Google Cloud Storage.

Data scientists and data engineers are in demand. When asked what skills their teams needed to strengthen, 44 percent said data science and 41 percent said data engineering.

Respondents used a variety of streaming and data processing technologies. Half of the respondents (49 percent) used either Apache Spark or Spark Streaming, while other popular tools included open source projects (Apache Kafka, Apache Hadoop) and their related managed services in the cloud (Elastic MapReduce, AWS Kinesis).

Business intelligence uses a mix of open source and managed services. When it comes to SQL, respondents favored open source tools (Spark SQL, Apache Hive) and managed services in the cloud (AWS RedShift, Google BigQuery).

Although 60 percent of respondents aren’t using serverless technologies, 38 percent indicated that they were using at least one, with AWS Lambda the most common at 30 percent. This pattern remained consistent across geographic regions.

“It is clear that in 2019 companies are planning to invest in implementing analytics, AI and automation tools,” said Ben Lorica, O’Reilly’s chief data scientist and chair of the Strata Data Conference. “However, in order to do so successfully, initial investments must be made in the foundational technologies and infrastructure needed to sustain success. Our research shows that a majority of companies understand this and are already building – or at the very least evaluating – platform solutions and tools to make this possible.”


from Help Net Security http://bit.ly/2t2zgni

Deloitte launches new proprietary solution to help manage records disclosure and data privacy

In response to increasing demand for disclosure of government records and mounting regulatory requirements for personal data privacy, Deloitte launched a workflow management platform. Built on Relativity’s ediscovery platform and hosted in Deloitte’s FedRAMP-authorized environment, Deloitte’s disclosure solution is designed to help Deloitte clients manage information requests, create Freedom of Information Act (FOIA) responses and reports and manage data privacy.

In coming months, Deloitte’s disclosure solution will also be available on Relativity’s cloud-based software as a service (SaaS) platform RelativityOne.

“The amount of data that government agencies must identify and review to respond to requests for disclosure of information is growing, as is the volume of requests government agencies face from their own and other agencies, Congress, opposing counsel in litigation and the general public,” said Chris Knox, Deloitte Risk and Financial Advisory managing director and a leader in the Federal discovery practice, Deloitte Transaction and Business Analytics LLP.

“Analysts in U.S. FOIA offices need to efficiently and accurately respond to hundreds of thousands of information requests annually, while also protecting individuals’ data privacy rights. Traditional case management tools simply aren’t built to decrease backlogs, to reduce human error, to meet requester expectations and to compile annual FOIA reporting quickly and efficiently.”

Deloitte’s disclosure solution helps automate redactions and applies analytics to improve the efficiency of responses to information requests, while also offering scalable data privacy regulatory compliance functionality.

Knox continued, “Technologies like robotic process automation, advanced analytics and machine learning have progressed such that they can now help organizations — in both government and private sector — more efficiently and more accurately respond to records requests. They can also help manage data privacy with the scalability to address current regulations as well as potential regulations that may be announced in the future.”

With Relativity offered in 28 countries and supported by more than 130 Relativity-certified professionals, the 600-member Deloitte Discovery team helps clients manage multinational litigation, investigations and regulatory compliance matters.


from Help Net Security http://bit.ly/2Be1Ern

Cradlepoint and Microsoft create integrated solution to simplify and accelerate enterprise IoT projects

Cradlepoint introduced a platform integration with Microsoft Azure that will make it faster and easier for enterprises to “Build Your Own IoT” solutions (BYOIoT). The solution includes Cradlepoint’s new NetCloud Edge Connector for Azure IoT Central to help simplify and accelerate the process of building and deploying IoT applications and devices.

According to a recent study by Cisco, 74 percent of IoT initiatives fell short of success, with 54 percent of respondents citing a lack of collaboration between IT and other business units and 48 percent citing a lack of IoT expertise.

The new platform integration between Microsoft and Cradlepoint helps provide Operational Technology (OT) teams the freedom and ability to build and deploy IoT solutions that improve business competitiveness and efficiency, while giving IT teams the tools they need to ensure device connectivity at the wide-area-network (WAN) edge.

“The mission of Azure IoT Central is to improve the success rate and time-to-value of enterprise IoT projects by reducing development complexity and IT integration challenges,” said Tony Shakib, general manager of Azure IoT.

“With Cradlepoint, we are extending this mission by addressing one of the key issues that threatens successful IoT outcomes—enabling IT to deliver secure and reliable IoT device connectivity across the WAN.”

Azure IoT Central is a software-as-a-service (SaaS) solution that lets customers build and deploy IoT applications without cloud computing experience or specialized skills. Cradlepoint will join the Azure IoT Central partner program allowing the two companies to collaborate on marketing the solution to enterprise and public sector customers.

Cradlepoint will offer a pre-built integration for its NetCloud service, called NetCloud Edge Connector for Azure IoT Central, that allows organizations to connect IoT devices to applications built on the Microsoft Azure IoT Central platform with visibility, security and control.

For example, a vendor that uses kiosks in airports, campuses, or storefronts can develop and deploy a new IoT solution to monitor the video terminal, the door alarm, the temperature and the WAN connectivity of the application running in the kiosk. Previously, developing such a solution would require complex application programming and introduce network connectivity and management challenges.

“Our State of IoT 2018 highlights the nature of enterprise IoT with 69 percent of customers having either deployed or planning to deploy IoT; with 53 percent developing their own solutions, which includes leveraging their own WAN,” said George Mulhern, chairman and CEO at Cradlepoint.

“The prevailing go-it-alone approach is the catalyst behind our Microsoft partnership and integration with Azure IoT Central. We are committed to helping IT teams deliver secure and reliable connectivity and accelerate IoT time-to-value for their business constituencies while providing a pathway to 5G in the future.”

The new IoT solution will be very beneficial for both Microsoft and Cradlepoint partners. It provides a blank canvas for developing, deploying and managing custom IoT applications for specific customers, use cases and industries without them having to invest in cloud infrastructure or specialized resources.

“The Microsoft and Cradlepoint integration of the NetCloud and Azure IoT Central platforms is a huge win for companies seeking to build their own IoT solutions,” said James Brehm, founder and chief technology evangelist at James Brehm & Associates. “The new partnership promises to help bridge the OT and IT collaboration gap, improve IoT project outcomes and finally put the IT back into IoT.”

The Cradlepoint NetCloud Edge Connector for Azure IoT Central provides an edge-to-cloud connection for sending IoT data into Azure IoT Central and runs on Cradlepoint wireless router solutions for branch, mobile, and IoT networking.

It includes an update to the NetCloud Operating System (NCOS) as well as workflow within NetCloud Manager. The NetCloud Edge Connector for Azure IoT Central is scheduled for limited availability in April 2019 for active subscribers of the Cradlepoint NetCloud service.


from Help Net Security http://bit.ly/2G2LJ2S

Check Point and Ericom Software join forces to tackle browser-based attacks

Ericom Software unveiled the integration of Ericom Shield Remote Browser Isolation (RBI) solution with Check Point Software Technologies Advanced Network Threat Prevention. Combining Ericom Shield remote browsing technology with Check Point threat intelligence and edge security protection generates a defense that enables organizations to stay ahead of attackers, while maintaining user access to browser-based services and assets.

As a defense against threats, Check Point leverages HTTPS inspection, sandboxing, threat extraction, application control, URL filtering and content awareness to secure the organizational edge. By isolating the browser in a secure remote disposable container that is separated from the end-user device, Ericom Shield adds a layer of protection from the vast amount of unknown malware that penetrates organizations via the internet through human error, malicious links and harmful downloads.

“Web browsing is an indispensable business practice in virtually all organizations today. Despite great success in identifying and protecting against threats in real-time, malware continues to penetrate organizations via browsers and wreak havoc,” said Snir Hassidim, Business and Corporate Development Manager at Check Point Software Technologies.

“By integrating the clientless Ericom Shield solution with Check Point’s product line, we enable customers to block malicious content before it approaches internal networks, while preserving a transparent and natural browsing experience. The joint solution can provide effective secure web browsing protection against the advanced 5th generation of cyber-attacks.”

The integrated secure web browsing solution offers:

  • The ability to halt malicious attacks before infection reaches the endpoint,
  • Zero installation on user endpoints for organizational deployment,
  • Support for all browsers, devices and operating systems,
  • Disposal of active web content when session ends to prevent malware from persisting.

Said Daniel Miller, Senior Director of Product Marketing at Ericom Software, “The joint Check Point-Ericom solution offers a unique level of protection. By layering Check Point SandBlast and Ericom Shield, organizations can confidently provide essential access to the web resources users need on a dynamic basis, without additional alerts, or complex configurations.”

Added Miller, “Remote browser isolation technology (RBI) is increasingly recognized as a highly reliable and secure layer of defense for organizations that must protect critical infrastructure while empowering increasingly web-dependent knowledge workers, as recently stated by Gartner. By integrating with Ericom Shield, Check Point, as a pioneering leader in the security field, is enabling organizations to safeguard organizational endpoints and systems while empowering users to make full, unimpeded, and secure use of the Internet resources they need.”


from Help Net Security http://bit.ly/2HRyMuS

Syncurity partners with SentinelOne to accelerate alert triage and orchestrate incident response

Syncurity and SentinelOne formed a strategic partnership and technology integration of the SentinelOne autonomous endpoint protection console with the Syncurity IR-Flow SOAR Platform. The joint solution will enable customers to accelerate alert triage and orchestrate response to threats across all endpoints.

SentinelOne is the only next-gen solution that defends every endpoint against any type of attack, at all stages in the threat lifecycle. Through this integration, customers will be able to ingest threat and incident data directly from SentinelOne into the IR-Flow SOAR Platform to identify and triage suspicious activity. Importantly, they can combine this data with data from other IT and security solutions to provide security analysts with more accurate identification and risk assessment of advanced attacks.

In addition, the Syncurity IR-Flow SOAR Platform can quarantine and remediate any compromised endpoints using the SentinelOne API. The IR-Flow patent-pending Triage Scoring Engine assesses risk as information from different IT and security tools is evaluated via automated API actions.

The Syncurity IR-Flow Platform identifies alerts, and validates automatically or through guided analyst interactions which situations should be escalated to a security incident, and then orchestrates actions needed to contain and remediate across the enterprise. These actions include changing user passwords, sending email verifications, restarting and scanning hosts, getting device and/or user information, and enabling or disabling two-factor authentication.

They can also generate and list reports, list processes, get files and list applications on a host. The actions can be automated or directed through ticketing system integrations, such as the recently announced ServiceNow app.
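As a toy illustration of this kind of orchestration, the sketch below combines boolean alert signals into a risk score and maps score bands to response actions. The signal names, weights, thresholds, and action names are all invented for the sketch; this is not Syncurity’s patent-pending scoring engine:

```python
# Invented weights for signals that different security tools might report.
SIGNAL_WEIGHTS = {
    "endpoint_detection": 50,  # e.g. an endpoint-protection verdict
    "known_bad_hash": 30,      # threat-intelligence match
    "multiple_hosts": 20,      # possible lateral spread
}

def triage(alert):
    """Score an alert from its signals and choose response actions."""
    score = sum(w for signal, w in SIGNAL_WEIGHTS.items() if alert.get(signal))
    if score >= 70:
        actions = ["quarantine_endpoint", "reset_user_password", "open_incident"]
    elif score >= 30:
        actions = ["scan_host", "notify_analyst"]
    else:
        actions = ["log_only"]
    return score, actions
```

The point of scoring before acting is that disruptive steps like quarantining an endpoint are reserved for alerts corroborated by multiple tools, while weak single-signal alerts only trigger investigation.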

“Strategic partnerships of this nature represent the future of the security market – combining autonomous endpoint protection with powerful SIEM capabilities to speed incident response, while helping customers contextualize how they’re mitigating risk,” said Daniel Bernard, VP Business & Corporate Development, SentinelOne.

“This integration will enable customers to see the true story of what’s happening across their network and endpoints, while knowing that they’re fully protected against today’s most devastating threats.”

“The integration of SentinelOne and the Syncurity IR-Flow SOAR Platform pairs two surging leaders in their respective markets to enable our joint customers to more quickly identify, assess and take action against ever-changing cyber risks,” said John Jolly, CEO, Syncurity. “The combination of the orchestration and automation along with IR-Flow’s robust case management means customers can more effectively measure and optimize their security stack.”

The joint solution will be available through mutual channel partners of SentinelOne and Syncurity, including World Wide Technology and Assurance Data.


from Help Net Security http://bit.ly/2WyPSRm

QuantLR partners with PacketLight Networks to secure next-generation networks

QuantLR LTD and PacketLight Networks will work together to form a more secure optical network by jointly developing an integrated QKD solution.

The partnership follows the recent signing of a Letter of Intent between the two companies, under which they will cooperate and share the information required to develop the QKD solution as part of Layer 1 encryption of a fiber optic link. The intention is to demonstrate the solution at the site of one of PacketLight Networks’ customers.

“We are happy to collaborate with PacketLight Networks to advance Quantum encryption solutions that are proven to be the only ultimately secured solutions to any eavesdropping and hacking attempts of communication lines in the present and in the future,” said Yanir Farber, CEO of QuantLR.

“The collaboration with a leading company such as PacketLight Networks will accelerate our development process and enable us to offer the market a low-cost solution. The quantum encryption market is predicted to reach a sales volume of more than $24Bn in 2025 and we plan to be a significant player in this market.”

“Data security has become the most important aspect in data center connectivity over fiber and DWDM networks,” says Koby Reshef, CEO of PacketLight Networks.

“Partnering with QuantLR will allow us to provide an innovative encryption solution leveraging quantum mechanics to maintain a high level of data encryption at an affordable cost, answering our customers’ future security needs as they evolve in both complexity and importance.”


from Help Net Security http://bit.ly/2Bcl0NB

Ixia launches new software for management of visibility solutions

Ixia has launched a new software solution for managing wired, wireless, and virtual visibility solutions. The new Ixia Fabric Controller (IFC) Centralized Manager supports network packet brokers, taps, bypass switches, and cloud visibility solutions via a single graphical user interface (GUI).

Today’s network infrastructure spans on-premise, cloud and private data center devices, all facing increasing traffic volumes and escalating security threats. According to Cisco, annual global IP traffic will reach 4.8 ZB per year by 2022, or 396 Exabytes (EB) per month.
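As a quick sanity check on those figures (assuming decimal SI prefixes, where 1 ZB = 1,000 EB), the annual projection converts to roughly 400 EB per month, in line with the cited 396 EB once rounding is accounted for:

```python
# Convert the projected annual IP traffic to a monthly figure.
# Assumes decimal (SI) prefixes: 1 zettabyte (ZB) = 1,000 exabytes (EB).
ANNUAL_TRAFFIC_ZB = 4.8
MONTHS_PER_YEAR = 12

monthly_traffic_eb = ANNUAL_TRAFFIC_ZB * 1000 / MONTHS_PER_YEAR
print(f"{monthly_traffic_eb:.0f} EB/month")  # roughly 400 EB/month
```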

Growing network complexity and threat sophistication demand that NetOps and SecOps teams automate low-level tasks and improve network traffic visibility for threat detection, analysis and response, according to Gartner.

“Serious gaps in device management can lead to performance issues, visibility blind spots and security vulnerabilities,” stated Recep Ozdag, vice president of product management for Keysight’s Ixia Solutions Group.

“Professionals responsible for managing enterprise networks need a resource that can provide a comprehensive view of their monitoring devices at any time, despite location. Ixia’s new IFC Centralized Manager delivers the centralized network platform needed to address those gaps, improve performance and reduce blind spots.”

IFC Centralized Manager eases configuration and monitoring of taps, virtual taps, data monitoring switches, and network packet brokers, including third-party devices, through a single pane of glass.

IFC Centralized Manager includes:

  • A network topology map that shows the “up” or “down” status of supported devices and custom status providing visualization of monitoring resources for security, performance, capacity planning and compliance
  • Real-time data acquisition from auto-discovered devices that is archived and presented in graphical formats for snapshot viewing and historical trend analysis
  • A high-availability feature that keeps primary and secondary units (physical, virtual or a combination) in sync behind one floating IP address, allowing instant takeover when the primary fails
  • Automated task execution across devices for configuration and fault management to simplify script and policy execution and software upgrades
  • Security and user management features for role-based access, user authentication and external authentication based on global policies.

from Help Net Security http://bit.ly/2GciZnP

XebiaLabs drives DevOps innovation following 2018 $100M+ strategic capital investment

XebiaLabs, since receiving a $100M+ strategic capital investment in early 2018, has added a range of new product enhancements that address enterprise DevOps challenges. These innovations further improve an organization’s ability to migrate to the cloud, connect Continuous Integration and DevOps pipelines, manage DevOps as code, tie IT service management (ITSM) tools into the DevOps process, and meet governance requirements while accelerating software delivery.

DevOps enables many paths to customer value

Software development may occur behind the scenes, but its results are very much at the front lines of many businesses as products and services are consumed online. Companies adopt DevOps because it is about optimizing software delivery to maximize customer value. The way organizations use DevOps to achieve that value, however, is not uniform.

According to XebiaLabs CEO, Derek Langone, “Some companies start DevOps as a broad transformation initiative, but we’re also seeing a growing list of organizations focusing on particular projects, like migrating apps to the cloud, scaling up their use of containers, or integrating compliance and security into the software delivery process. That’s why over the past year, we’ve invested heavily in developing new capabilities that help development teams succeed with these initiatives. Our platform builds in flexibility, so a company can approach DevOps on a project-by-project basis and scale up to an enterprise-wide implementation if and when they choose.”

2018 product innovation

The XebiaLabs DevOps Platform allows IT organizations to address their top priorities, from standardizing cloud deployments and scaling containers, to doing DevOps as code and managing security and compliance requirements. In 2018, XebiaLabs further enhanced its enterprise DevOps platform in the following areas:

Cloud migration — Connects Amazon Web Services (AWS) and other cloud platforms to the rest of the enterprise DevOps pipeline. Accelerates and simplifies cloud migration for developers with DevOps as Code and blueprints for AWS, and allows for a software release process that includes governance, compliance, and security requirements. Provides support for a large number of AWS services (see integrations below). In addition to AWS, the XebiaLabs DevOps Platform integrates with many popular public and hybrid cloud services, including Microsoft Azure and Google Cloud.

DevOps as Code — Allows teams to define deployment packages, infrastructure, environments, release templates, and more as code that can be versioned and stored alongside application code. Also offers a CLI for kicking off DevOps processes and exporting configurations, and provides blueprints that help developers push an app to production as code.

Compliance, security, and risk — Includes custody reporting, a new security risk dashboard that combines automated risk assessment with security and compliance information, and compliance overviews that summarize IT governance violations for common standards such as OWASP, PCI 3.2, and CWE/SANS.

Connecting CI/CD and IT service management — Offers visibility and synchronizes data across the DevOps pipeline by connecting issue tracking and ITSM ticketing tools like Jira and ServiceNow, so that user stories and change tickets are always up to date.

DevOps Pack for Jenkins — Provides everything enterprise DevOps teams need to integrate their Jenkins pipelines with their software delivery pipelines and make Jenkins pipeline data available to everyone.

Container migration — Delivers the framework and capabilities required for large-scale container deployments, such as standardization of software release pipelines and deployment processes; application dependency management; visibility into deployment status across all environments; embedded compliance and security with audit tracking; and management of hybrid deployments across environments. Also integrates with Red Hat OpenShift, Kubernetes, and Docker, and is available as a certified OpenShift Docker container in the Red Hat Container Catalog.

Integrations with cloud, container, security, and compliance tools — Enhances support for Amazon EC2, Amazon Simple Storage Service (Amazon S3), AWS Lambda, AWS Fargate, Amazon ECS, Amazon EKS, Amazon Elastic Container Registry (Amazon ECR), AWS CloudFormation, Amazon Elastic Block Store (Amazon EBS), AWS Elastic Load Balancing (AWS ELB), Amazon Relational Database Service (Amazon RDS), AWS CodePipeline, and Amazon API Gateway. Other key integrations include Black Duck, SonarQube, Fortify on Demand, Fortify SSC, Checkmarx, ServiceNow, OpsGenie, Terraform, Cloud Foundry, and Atlassian Crowd.

Other highlights

XebiaLabs continued to extend its position in the enterprise DevOps market by posting record revenue growth for a third straight year. In addition to the business growth, customer acquisition continued to accelerate at a record pace, with marquee customers such as Amgen, AXA, Bosch, Bpifrance, Maersk, MathWorks, TJX, T-Mobile, and Zions Bancorporation.

The company also added offices in Germany and Spain to service growing demand in Europe, and increased global staff by 40%, with an investment in engineering and product leadership.

In addition, the company released new versions of its most popular DevOps resources, including the “Periodic Table of DevOps Tools,” the “DevOps Diagram Generator,” and The IT Manager’s Guide to DevOps: How to Drive the Business Value of Software, by Tim Buntel and the late Robert E. Stroud.


from Help Net Security http://bit.ly/2BbzT2I

Baffin Bay Networks expands into the US with acquisition of Loryka

Baffin Bay Networks has revealed its acquisition of Loryka. Baffin Bay Networks launched its cloud-based threat prevention service in 2017, and this acquisition is the latest milestone following the company’s $6.4m Series A investment last year, led by EQT Ventures, accelerating its global expansion.

Loryka will become the first Baffin Bay affiliate in the United States. The Threat Research centre will be based in Portland, Oregon, with additional operations in Virginia, Maryland and Tulsa, Oklahoma.

Zac Lindsey, founder of Tulsa law firm LINDSEYfirm, will head up the company’s US operation, effective immediately. Loryka started out as a research project six years ago, but has since grown into a platform and data pipeline that allows researchers to gain insights into data breaches and security attacks.

Loryka brings IoT security expertise to Europe, a continent that struggles to combat such vulnerabilities. Earlier this month, it was revealed that less than half of European companies can detect IoT device breaches, with UK firms having one of the lowest IoT breach detection capabilities in Europe, second only to France.

Loryka believes in collaboration and the sharing of useful data points, providing researchers with the data they need to innovate and make technical breakthroughs that benefit everybody. The company brings to Baffin Bay Networks a team of developers, researchers, and security professionals who are passionate about data. Loryka has built relationships with major industry partners such as Sierra Wireless and Cradlepoint.

Justin Shattuck, CEO and founder of Loryka, commented on the news: “This move gives us a great springboard to take our research expertise and our philosophy of data sharing to new clients and markets. Joining forces with a cyber security platform like Baffin Bay Networks shows we have transitioned from a data collection company to a genuine threat prevention organisation, providing solutions to corporate end-users at a larger, more significant level.”

Joakim Sundberg, CEO and founder of Baffin Bay Networks, added: “This is a big step for Baffin Bay Networks as we look to expand our reach and create even more value to customers. With this acquisition we will have a threat research team in the US with great local knowledge and experience, meaning we will be able to provide clients with unrivalled threat intelligence and services to help them protect their businesses.”


from Help Net Security http://bit.ly/2MKjCX4

I Cut Microsoft Out of My Life—or So I Thought

Week 4: Microsoft

When I initially planned to block all the tech giants from my life, I hadn’t thought to include Microsoft, mostly because Microsoft is—these days, at least—rarely on the receiving end of criticism for destroying civilization as we know it.

Microsoft’s days as a tech supervillain are a distant memory, dating back to the 1990s when 20 states, along with the U.S. Department of Justice, assembled like Voltron to take the tech company down for violating antitrust law.

But then I’m reminded that Microsoft is a web hosting giant when I see news in August that it threatened to pull its hosting services from Gab because of the social network’s anti-Semitic content. And since November, Microsoft has been competing with Amazon and Apple for the title of most-valuable public company in the world. This all forced me to admit that Microsoft is still fully deserving of its inclusion in the “Frightful Five” along with Amazon, Google, Facebook, and Apple. If nothing else, I think, it will be interesting to see the long-term effect of that decades-old antitrust crackdown: Will it be easier to block Microsoft because the government tried, at the turn of the 21st century, to prevent it from unfairly dominating the computing industry?

To prevent myself from using any of Microsoft’s services, I connect my phone, computer, and smart devices to a custom VPN designed for me by technologist Dhruv Mehrotra; it blocks the 21,573,632 IP addresses controlled by Microsoft. If you’re like me and exclusively use Macs, you might think you don’t use Microsoft very often. But it operates the workhorses of social media—LinkedIn, Skype, and Github—as well as a big distraction from work in the form of Xbox. During the block, I can’t use any of them, nor can I connect to websites and apps hosted by Microsoft Azure, its rapidly expanding cloud business.

Even though I don’t use any Windows machines, don’t own an Xbox, and don’t turn to Microsoft Office for document creation, the company still turns out to be tricky to block, not so much online, but in the real world, where Dhruv and his VPN can’t help me. In one surprise example, I run into the Redmond giant in my car—a 2015 Ford Fusion, which I have from a long-term rental service called Canvas. I’ve been driving it for weeks but only now notice a placard on the center console that reads, “SYNC, powered by Microsoft.” Turns out, Microsoft’s technology powers the car’s entertainment and navigation system, so I have to drive to work in silence.

(This is actually one of the last Ford models where that’s the case; Ford dumped Microsoft reportedly because its software was too buggy. Now Ford offers services from Google and Amazon. “Ford and Alexa, a match made in tech heaven,” claims Ford’s website, which sounds like anything but my idea of the divine.)

When I tell Dhruv about this, he points out that there are many more places I could potentially be using Microsoft services without realizing it, like when I buy coffee at a coffee shop that uses Windows as the operating system for its payment system or when I use public transportation that uses Microsoft to power its back-end services. As the New York Times points out, Microsoft is “mainly a supplier of technology to business customers.”

That means that Microsoft is virtually impossible to completely avoid without also retreating from society entirely, which, at least for me, isn’t an option. Just as Amazon was inescapable on the web, Microsoft is unavoidable IRL.


So Microsoft is in many ways a “B2B” company these days, and it’s undeniable that at some point this week I rely on its services when I patronize restaurants, coffee shops, stores, or anywhere else money changes hands. But in terms of direct consumption of Microsoft products, this is the easiest week in my tech-giant-blocking experiment so far. Microsoft is still a behemoth, but one whose impact is hard for me to measure in this experiment, because many of its billions of dollars come from products like Windows Servers that power government and corporate infrastructure rather than being used directly by consumers.

Not to say consumers aren’t using Microsoft products on a large scale: Windows still accounts for 40 percent of all operating systems accessing U.S. government websites (a tally that includes iOS and Android), which is a pretty good indication of its general prevalence. It’s just not an issue for this consumer. As I admitted in the intro to this series, this reflects my own tech biases. Avoiding the company while functioning in society is probably impossible, but it is possible, I find, to avoid personally using Microsoft’s products.

Maybe this is the way things would have gone regardless of what happened in the 1990s. Maybe this was the kind of company Microsoft was fated to become. Or maybe, if the government hadn’t intervened decades ago to keep Microsoft from dominating the world of computers, we’d all still be using Microsoft-owned Hotmail and surfing friend feeds on Microbook and posting our photos to Microgram and Binging our latest health concern.

That decades-old Microsoft antitrust case was sprawling and complicated in the way that any legal matter is, but it boiled down to a rather simple catalyst. Windows was the dominant operating system 30 years ago, as it is on PCs still today, and the internet was only just starting to develop. In 1994, a company called Netscape released a popular internet browser called Navigator that it was selling for about $50, and Microsoft decided to undercut it.

To try to ensure its dominance in the growing business that was the internet, Microsoft developed its own internet browser called Internet Explorer, gave it away for free, and insisted that it be bundled with Windows. So when you bought a computer—which was probably running Windows, because most computers then did—you’d get Internet Explorer installed by default the same way you get Safari pre-installed on your iPhone or the Google Play Store pre-installed on your Android phone, which gave Internet Explorer a distinct advantage.

Microsoft was using its powerful control of the computer operating system supply line to muscle its way into controlling people’s internet experience. (Netscape eventually made Navigator free as well, helping to lay the groundwork for an internet where almost everything is “free” but monetized instead via our attention and data.) Regulators worried that Microsoft was using its dominant position in the software industry to crush competitors and would-be competitors, and so they sued.

Giving Internet Explorer to people for free was seen as ultimately hurting consumers, a reading of antitrust law that American regulators have since mostly abandoned, though activists like the Open Markets Institute are pushing for it to be re-embraced. It’s an approach that Europe recently adopted, as evidenced by its antitrust crackdown on Google last year; European regulators fined Google $5 billion for making its search engine the default and bundling the Google Play store and the Chrome browser for free with Android operating systems, which run on 80 percent of smartphones.

The government originally hoped to break Microsoft up into two companies (one that made operating system software and another that ran the rest of its products), which is similar to what tech company critics are calling for today for companies like Facebook and Google.

But the only concessions the government ultimately got from Microsoft after a years-long battle were a promise not to conspire to keep competitors from being excluded from new computers and a commitment to make Windows interoperable with non-Microsoft software. Still, that was significant, according to law professor Tim Wu and U.S. Senator Richard Blumenthal, who wrote in a New York Times op-ed that those concessions opened the door for the rise of new technology companies:

[W]hat we do know is that the remedy pushed Microsoft to act with more caution, creating an essential opening for a new generation of firms. It might seem like a cruel irony that the immediate beneficiaries of the Microsoft antitrust case—namely, Google, Facebook and Amazon—have now become behemoths themselves. But this is how the innovation cycle works: It creates room for saplings to grow into giants, but then prevents the new giants from squashing the next generation of saplings. (Microsoft was itself, in the early 1980s, the beneficiary of another antitrust case, against IBM, the computing colossus of its time.)

The then-new technology companies that thrived due to the government throttling Microsoft’s growth are now dangerously large and powerful, according to antitrust critics. But regulators, in the U.S. at least, have raised very few concerns about monopolies. Amazon, Facebook, Google, Microsoft, and Apple, combined, have bought over 400 companies and start-ups over the last decade, with none of the acquisitions facing pushback from regulators, as the Wall Street Journal points out.

“Today’s titans tower over their kingdoms, secure behind their walls of user data and benefiting from extreme network effects that make serious competition from startups nearly impossible,” wrote Antonio Garcia-Martinez in Wired recently about the lessons learned from the Microsoft legacy. “U.S. antitrust laws, written in the industrial age, don’t capture many of the new realities and potential dangers of these vast data empires. Maybe they should.”


Over the course of my week blocking Microsoft, my devices try to send over 15,000 data packets to the company’s servers, or just as much data as they tried to send to Facebook when I was blocking it—not much compared to Google (over 100,000) or Amazon (nearly 300,000). Most of the interaction with Microsoft is a steady stream of about 1,000 packets each night that mystifies me and Dhruv until we realize it’s when I open up my library book app to read before going to sleep, an app whose data must be hosted by Azure. I could read what I had already downloaded—the Wheel of Time books, because I’m a sucker for fantasy series destined for TV—but because of the attempted interaction with Microsoft in the background while I am doing so, I abandon the book for the week.

I’ll reiterate here that this low level of interaction with Microsoft might be unique to me, or at least unusual. Lots of readers probably have a Windows machine at work, or watch their favorite shows on a Surface tablet, or use Outlook for their corporate email, and wouldn’t find the Microsoft block as seemingly easy as I did. And even I, who thought I only relied on Microsoft for LinkedIn, Skype, and apparently, my car’s radio, realized through this exercise that I probably interacted with it in the real world, in coffee shops or paying my fare on the bus, in ways I couldn’t capture this week.

The big difference between Microsoft and the others in the Big Five is that it’s been forced into the shadows while the others are freely operating their respective empires right in our faces all the time.

So if the conclusion is that I can live (sort of) without Microsoft today because of the government’s antitrust crackdown in the 90s, the question is what the government should do now about the behemoths I am finding I can’t live without.

Next up: Apple

This series was supported by a grant to Dhruv Mehrotra from the Eyebeam Center for the Future of Journalism.


from Lifehacker http://bit.ly/2CVm7Bg

SNL's Hand-Written Cue Cards Are a Good Hack

Hey, I love a teleprompter. I wrote about how good teleprompters are just two hours ago. But SNL doesn’t use them; they use cue cards, as they love to point out now and then in meta sketches. And they show their cue card process in the video above, which is way more fun than the thumbnail implies.

The process is laborious: it takes a team of eight people, plus support from other staff, to write, re-write, triple-check and display the cards. The card writers have to learn how to properly space out their letters, how to manually erase lines, and how to flip through the cards without fumbling while the actor reads off them, on live television. But it’s easy to see why they keep this seemingly archaic system around.

For one, some of the team has decades of experience doing this, and any changeover would introduce new problems. Start printing the cards and you’ll inevitably have a printer breakdown, with no one left who can properly hand-write them.

But more importantly, hand-written cue cards are part of the charm of the show. If you’re still a fan of SNL, it’s probably not thanks to Alec Baldwin’s Trump impressions or the extra edition of Weekend Update. It’s the stuff that’s been special since the beginning—and a few new things, like digital shorts, that manage to still feel like the fun class project that SNL always is. (This is why the funniest digital short is Laser Cats.)

Actors cracking up and breaking character, the host getting “interrupted” during the monologue, sketches that go backstage, Tom Hanks becoming a meme, Fred Armisen and Kristen Wiig improvising a duet, John Mulaney rewriting the cue cards to make Bill Hader laugh—this is the stuff that separates SNL from a million YouTube channels. This is the show celebrating its medium and using the special power of a live show, of celebrity guests, of a comedic institution whose biggest critics can still name a dozen sketches they love love love. This is why they keep hand-writing the cue cards.


from Lifehacker http://bit.ly/2BgOI3T

We Tested a Teleprompter That You Control With Your Voice

When you shoot a video with a teleprompter, you usually have three options: find someone to sit and scroll the words manually; set an app to auto-scroll and hope you can keep up; or handle a remote, which costs money and distracts you during your delivery. We were excited to discover Teleprompt.me, the voice-controlled teleprompter that really works. We covered it briefly earlier, but now we’ve tested it on camera to show you how seamless it really is. No one is controlling the teleprompter in this video!

As you’ll see, Teleprompt.me is hard to trick, even with last names and other non-dictionary words. It kept up as I talked really fast, and it didn’t break when I skipped or added a few words in my script. Frankly, it’s surprising to find an app that does its job so well.
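Teleprompt.me’s actual algorithm isn’t public, so this is only a guess at the technique, but one plausible way a voice-controlled prompter stays on track is to match each recognized word against a small look-ahead window of upcoming script words, which tolerates skipped words and ad-libs. A minimal sketch:

```python
# A guessed sketch of voice-following: advance a cursor through the script
# by matching recognized words within a small look-ahead window, so skipped
# or ad-libbed words don't derail the scroll position.
def follow_script(script_words, heard_words, window=4):
    pos = 0
    for heard in heard_words:
        # Search a few words ahead of the current position.
        for offset in range(window):
            i = pos + offset
            if i < len(script_words) and script_words[i].lower() == heard.lower():
                pos = i + 1  # scroll to just past the matched word
                break
        # No match within the window: ignore the word (likely an ad-lib).
    return pos

script = "welcome to the lifehacker studio where we test apps".split()
# The speaker skips "to the" and ad-libs "um":
heard = "welcome um lifehacker studio".split()
print(follow_script(script, heard))  # → 5, the cursor lands just past "studio"
```

The small window is what makes a scheme like this hard to trick: a stray word matches nothing nearby and is simply ignored, while the cursor only jumps forward when it sees a genuine script word.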

The prompter includes some standard features like mirroring, so you can connect it to a professional teleprompter camera peripheral. Or you can use it to record on your webcam, or prop up your laptop right next to a regular camera.

Downsides

There are only two meaningful flaws: One, Teleprompt.me only works on Chrome on desktop. So you can’t use it with those nifty smartphone teleprompter mirrors. You’ll have to buy an app like the $20 voice-activated PromptSmart Pro (iOS/Android).

Two, you can’t give any voice commands like “restart” or “go back.” So if you want to do some re-takes, you’ll have to go hit the arrow keys. But you’d have to do that with other software too.

Beyond that, there are only tiny flaws. Sounds playing on your computer while you record can interfere with the microphone, so lay those in separately. And the app only displays plain text, so no italics or different font sizes.

Because the site requires access to your computer’s microphone, it’ll listen to you until you close the tab or change your permissions. The site also saves the script you’ve pasted in, until you clean your cache. Don’t use it to read aloud state secrets.

If you come across a script that Teleprompt.me can’t handle, there are plenty of free apps that use buttons or a set speed, including CuePrompter and Autocue. But honestly, Teleprompt.me is so good that we plan to use it for all our scripted videos in the Lifehacker studio.


from Lifehacker http://bit.ly/2MP9GM1

Security Flaws in Children's Smart Watches

A year ago, the Norwegian Consumer Council published an excellent security analysis of children's GPS-connected smart watches. The security was terrible. Not only could parents track the children, anyone else could also track the children.

A recent analysis checked if anything had improved after that torrent of bad press. Short answer: no.

Guess what: a train wreck. Anyone could access the entire database, including real time child location, name, parents details etc. Not just Gator watches either -- the same back end covered multiple brands and tens of thousands of watches

The Gator web backend was passing the user level as a parameter. Changing that value to another number gave super admin access throughout the platform. The system failed to validate that the user had the appropriate permission to take admin control!

This means that an attacker could get full access to all account information and all watch information. They could view any user of the system and any device on the system, including its location. They could manipulate everything and even change users' emails/passwords to lock them out of their watch.
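The flaw described here is a textbook broken-access-control bug: the client tells the server what privilege level it has, and the server believes it. A minimal sketch of the broken pattern and its fix (the function names, roles, and data are hypothetical illustrations, not Gator’s real API):

```python
# Sketch of the broken-access-control pattern described above.
# All names and data here are hypothetical, not Gator's actual backend.

USERS = {
    "alice": {"role": "parent"},
    "mallory": {"role": "parent"},
}

def get_all_accounts_broken(username, claimed_level):
    """BROKEN: trusts a privilege level supplied by the client.

    An attacker simply sends claimed_level=2 ("super admin") and the
    server hands over every account on the platform.
    """
    if claimed_level == 2:  # "super admin"
        return list(USERS)
    return [username]

def get_all_accounts_fixed(username):
    """FIXED: the privilege check uses server-side state only."""
    if USERS[username]["role"] == "admin":
        return list(USERS)
    return [username]

# Mallory escalates by changing one request parameter...
assert get_all_accounts_broken("mallory", claimed_level=2) == ["alice", "mallory"]
# ...but gets only her own account once the server validates the role itself.
assert get_all_accounts_fixed("mallory") == ["mallory"]
```

The fix is exactly what the researchers describe as missing: authorization decisions must come from state the server controls, never from a parameter the client can edit.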

In fairness, upon our reporting of the vulnerability to them, Gator got it fixed in 48 hours.

This is a lesson in the limits of naming and shaming: publishing vulnerabilities in an effort to get companies to improve their security. If a company is specifically named, it is likely to improve the specific vulnerability described. But that is unlikely to translate into improved security practices in the future. If an industry, or product category, is named generally, nothing is likely to happen. This is one of the reasons I am a proponent of regulation.

News article.


from Schneier on Security http://bit.ly/2Ruqx7a

New Mac malware steals cookies, cryptocurrency and computing power

A new piece of Mac malware is looking to steal both the targets’ computing power and their cryptocurrency stash, Palo Alto Networks researchers warn.

About the CookieMiner malware

Dubbed CookieMiner on account of its cookie-stealing capabilities, this newly discovered malware is believed to be based on DarthMiner, another recently detected Mac malware that combines the EmPyre backdoor and the XMRig cryptominer.

Like DarthMiner, CookieMiner uses the EmPyre backdoor for post-exploitation control. This agent checks if the Little Snitch application firewall is running on the victim’s host and if it is, it stops and exits. It can also be configured to download additional files.

The mining software mines Koto, a Zcash-based anonymous cryptocurrency associated with Japan.

But the most interesting thing about CookieMiner is that it is capable of stealing:

  • Chrome and Safari browser cookies associated with popular cryptocurrency exchanges and wallet service websites (Binance, Coinbase, Poloniex, Bittrex, Bitstamp, MyEtherWallet, etc.);
  • Usernames, passwords and credit card credentials saved in Chrome;
  • Cryptocurrency wallet data and keys; and
  • iPhone’s text messages (if backed up on the Mac).

“If only the username and password are stolen and used by a bad actor, the website may issue an alert or request additional authentication for a new login. However, if an authentication cookie is also provided along with the username and password, the website might believe the session is associated with a previously authenticated system host and not issue an alert or request additional authentication methods,” the researchers explained.

To get past 2-factor authentication, CookieMiner also tries to steal the text messages that deliver the second authentication factor.
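The stolen cookies matter because of how session authentication typically works: the 2FA challenge happens once at login, and afterwards the session cookie alone proves identity. A minimal sketch of that flow (hypothetical names, not any real exchange’s implementation):

```python
# Why a stolen authentication cookie sidesteps 2FA: the second factor is
# verified once at login, and afterwards the session cookie alone is proof.
# Hypothetical sketch, not any real exchange's implementation.
import secrets

SESSIONS = {}  # cookie value -> username

def login(username, password, otp):
    """Full login path: password AND one-time code are both required."""
    if password != "correct-password" or otp != "123456":
        raise PermissionError("2FA challenge failed")
    cookie = secrets.token_hex(16)
    SESSIONS[cookie] = username
    return cookie

def handle_request(cookie):
    """Every subsequent request: only the cookie is checked, no 2FA."""
    if cookie in SESSIONS:
        return f"authenticated as {SESSIONS[cookie]}"
    raise PermissionError("no valid session")

victim_cookie = login("victim", "correct-password", "123456")
# An attacker who exfiltrates the cookie needs neither password nor OTP:
print(handle_request(victim_cookie))  # prints "authenticated as victim"
```

This is why the researchers note that a site seeing a valid cookie may treat the session as a previously authenticated host and skip further challenges, and why CookieMiner pairs cookie theft with credential and SMS theft.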

How worried should Mac users be?

Jen Miller-Osborn, Deputy Director of Threat Intelligence (Unit 42) at Palo Alto Networks, told Help Net Security that they do not know if the attackers wielding the malware have been successful, but they believe there is only a very small chance of successfully bypassing multi-factor authentication for these sites using this approach.

Another unknown is how the malware is pushed on victims. But the researchers believe that, as in DarthMiner’s case, users are tricked into downloading the malicious software (i.e., they believe that they are downloading legitimate software or a pirated version of a legitimate app).

Palo Alto Networks has released indicators of compromise and C&C information that can help users and administrators detect active infections.


from Help Net Security http://bit.ly/2DKBH45

Google also abused its Apple developer certificate to collect iOS user data

It turns out that Google, like Facebook, abused its Apple Enterprise Developer Certificate to distribute a data collection app to iOS users, in direct contravention of Apple’s rules for the distribution program.

Unlike Facebook, though, the company did not wait for Apple to revoke its certificate. Instead, it quickly disabled the app on iOS devices, admitted its mistake and extended a public apology to Apple.

Google’s app

Google’s Screenwise Meter app is very similar to the Facebook Research app, although Google says that it has no access to encrypted data in apps and on devices.

Screenwise Meter was first launched in 2012 and is part of Google’s Opinion Rewards programs. Like Facebook, Google requires users to install and trust its enterprise certificate and pays users to install tracking apps on their mobile phone, web browser, router and TV. The aim is, of course, to see which apps users use, which websites they visit, and so on, so that the company can decide which products to acquire, create or improve and in which way.

“Originally, Screenwise was open to users as young as 13, just like Facebook’s Research app that’s now been shut down on iOS but remains on Android,” TechCrunch reported. Now users that young can participate only if they are “secondary panelists.”

Another difference between the two apps is that Google was much more upfront about the app’s capabilities, about the data it collects and how. Also, users could switch to a “guest mode” in those instances when they did not want their activity to be monitored.

Apple’s reaction

Apple’s revocation of Facebook’s Enterprise Developer Certificate came as a bit of a surprise to the public and to Facebook, who probably counted on getting just a light slap on the wrist. Instead, the company lost the ability to distribute apps through Apple’s Enterprise program and its internal iOS apps/betas stopped working.

It’s likely that this is just a temporary setback and it’s very unlikely that Apple will boot Facebook’s apps from its App Store.

Still, Apple had to be seen doing something about such an egregious effort to skirt their rules and, obviously, their threat of revoking the enterprise certificates of any developer using them to distribute apps to consumers was enough for Google to go into appeasement mode.

“The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize,” the company said. It now remains to be seen if that will be enough to avoid the same punishment, which could be very detrimental to Google’s product development and daily workflow.

The fallout

As mentioned before, Facebook employees’ day-to-day work has been made more difficult by the certificate revocation.

On the other hand, neither this revelation nor the privacy scandals before it have had an adverse effect on the platform’s user numbers or its financial performance.

Obviously, users are not deterred by all of these disclosures, but regulators and legislators are increasingly making noise about it.

This latest revelation has spurred US Senator Ed Markey to berate Facebook about offering teens money in exchange for their personal information when they don’t have a clear understanding of how much data they’re handing over and how sensitive it is.

“I strongly urge Facebook to immediately cease its recruitment of teens for its Research Program and explicitly prohibit minors from participating. Congress also needs to pass legislation that updates children’s online privacy rules for the 21st century. I will be reintroducing my ‘Do Not Track Kids Act’ to update the Children’s Online Privacy Protection Act by instituting key privacy safeguards for teens,” the Senator stated.

“But my concerns also extend to adult users. I am alarmed by reports that Facebook is not providing participants with complete information about the extent of the information that the company can access through this program. Consumers deserve simple and clear explanations of what data is being collected and how it is being used.”


from Help Net Security http://bit.ly/2BdvU5D

Wednesday, January 30, 2019

Taking ethical action in identity: 5 steps for better biometrics


Glance at your phone. Tap a screen. Secure access granted!

This is the power of biometric identity at work. The convenience of unlocking your phone with a fingertip or your face is undeniable. But ethical issues abound in the biometrics field.

The film Minority Report demonstrated one possible future, in terms of precise advertising targeting based on a face. But the Spielberg film also demonstrated some of the downsides of biometrics – the stunning lack of privacy and consumer protection.

What’s fascinating is that many of these concerns were anticipated over a century ago. In 1890, Louis Brandeis advocated privacy protection when he co-authored an article with colleague Samuel Warren in the Harvard Law Review advocating “the right to be let alone.” Brandeis, a future Supreme Court Justice, stated then that the development of “instantaneous photographs” and their dissemination by newspapers for commercial gain had created the need for a new “right to privacy.”

Today, technology has potentially swamped that right to privacy. From one public CCTV to the next, a long-term history can be stitched together from multiple video sessions to create one end-to-end picture of an individual’s journey. The owner of a shopping mall or private entertainment facility could easily track behavior from store to store, delivering specific data to store owners and making predictive findings on your behavior over time.

There’s a fix for the Minority Report problem: transparency. Companies who control biometrics should be transparent about what they are collecting, how it is collected and stored and the potential for abuse or mis-identification. If an error occurs, companies should be transparent with that data and provide a publicly available fix for that mistake.

Just as you have a right to know what Facebook is collecting on you, you should also have the right to know how, by which company and for what purpose your face can be identified. You shouldn’t be surprised to find you can be recognized in a crowded public place, and you should know whether law enforcement has access to that data.

The degree to which your shopping behavior is “private” is arguable, but it is inarguable that we should discuss this topic rather than just letting commercial terms dictate what the world knows about us.

Unfortunately, we don’t have a good grounding today in what an informed public discussion looks like. A recent Pew study demonstrated that 74% of the American public doesn’t understand that Facebook targets advertising to individuals based on profiles it has built of their interests. This is not the fault of the consumer: this is a problem caused by tech companies that have not served the public with full transparency and open information.

All of these ethical issues can be addressed, but we need to start now. Here are some basic steps that can assist you and your team in anticipating and addressing potential ethical issues.

1. Put humans in the loop: First, we should ensure that a human being is always in the loop. Human beings are not immune to errors or biases, but having a qualified person review your facial or fingerprints file to determine if it’s correct should be a standard practice. Today, it is not, and far too many people are mis-identified by faulty machine logic. Machines should not determine where the boundaries of personal freedom, privacy or human rights exist.
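One common way to put this into practice is a review queue: scores in an ambiguous middle band are routed to a person instead of being decided by the machine. A minimal sketch, with illustrative thresholds rather than recommendations:

```python
# Sketch of "human in the loop" for biometric matching: the machine only
# auto-decides clear cases; ambiguous scores go to a qualified reviewer.
AUTO_ACCEPT = 0.98  # illustrative thresholds, not recommendations
AUTO_REJECT = 0.50

def decide(match_score):
    if match_score >= AUTO_ACCEPT:
        return "accept"
    if match_score < AUTO_REJECT:
        return "reject"
    return "human_review"  # a person makes the final call
```

Where the thresholds sit determines how much load falls on reviewers versus how many people risk being mis-identified by machine logic alone.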

2. Limit government surveillance: Laws and regulations should limit the use of surveillance or publicly gathered biometric data (such as facial recognition or latent prints gathered by optical sensors), with exceptions only for the protection of human life or within the allowances made by court orders.

3. Build systems that don’t discriminate: It’s easy to say “don’t discriminate”, but the reality is harder. When designing a machine learning system, caution should be taken during system design to acknowledge possible bias and course-correct for that bias by testing with different populations and with different cultures and in different regions. Companies who use biometric systems should be held to account for how their algorithm might encourage inadvertent discrimination against individuals.
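Testing with different populations starts with a simple measurement: compute error rates per group and compare them. A sketch of that check, with made-up data and group labels:

```python
# Sketch: surface possible bias by measuring the false-match rate per
# demographic group. Data and group names are invented for illustration.
from collections import defaultdict

def false_match_rates(results):
    """results: iterable of (group, predicted_match, actual_match) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted and not actual:  # system matched someone it shouldn't have
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

sample = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, True),
]
print(false_match_rates(sample))  # group_b's false-match rate is higher
```

A large gap between groups is the signal to course-correct, whether by retraining on more representative data or adjusting per-group thresholds.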

4. Be open and transparent: Companies should be crystal clear with consumers about the intended use of a person’s biometric data, and not extend that usage to areas where it was not initially intended. Always ask a consumer, always respect the response, and don’t abuse the user’s trust. Many companies are surprised by how far consumers will go when they are properly and fully informed.

5. Clarify what consent means: Laws and local regulations should specify that consumers both understand the use case, and agree to allow surveillance or biometric gathering when they enter a store or use an online service.

The path towards creating and supporting best-in-class technology doesn’t just begin by writing some code or designing hardware. Instead, your technical system often emerges from a thicket of ambiguous and ever-changing customer needs. Hidden in those needs are also a set of unstated ethical quandaries. When you deliver a system that uses biometrics for identification and access, you open up one or more ethical questions. To make your system responsive to consumer concerns, it is always important to anticipate apprehensions, open yourself to listening to questions, deliver data on your planned usage, and provide full details on exactly what you are doing with biometric data.

These steps should assist you in delivering systems that people not only use every day, but trust implicitly with their most personal and private information.


from Help Net Security http://bit.ly/2Tpz7pU

Microsoft rolls out new tools for enterprise security and compliance teams

Microsoft has announced a number of new capabilities and improvements for tools used by enterprise administrators.

New Microsoft 365 security and compliance centers

The new Microsoft 365 security center allows security administrators and other risk management professionals to manage and take full advantage of Microsoft 365 intelligent security solutions for identity and access management, threat protection, information protection, and security management.

The new Microsoft 365 compliance center allows compliance, privacy, and risk management professionals to assess their organization’s compliance risks, protect and govern the organization’s data with sensitivity and retention labels, respond to regulatory requests and provides access to additional compliance and privacy solutions.

“The new specialized workspaces enable your security and compliance teams to have centralized management across your Microsoft 365 services, bringing together Office 365, Windows 10, and Enterprise Mobility + Security (EMS), with several Azure capabilities,” Microsoft noted.

The security center provides visibility into the organization’s security posture and allows admins to determine the actions best suited to improving it. The compliance center allows the discovery of compliance risks, shadow IT, employees’ non-compliant behaviors, etc.

Both centers also have a new alerts page through which admins can review Microsoft Cloud App Security alerts related to Office apps and services. (The alerts can be also integrated with the company’s SIEM or self-created solution.)

Both centers will be available to admins worldwide by the end of March.

A new compliance supervision solution

Microsoft is also rolling out a new supervision solution for monitoring digital communications for regulatory compliance and adherence to corporate policies, and for identifying and managing legal exposure and other risks before they damage corporate reputation and operations.

“With Supervision policies, you can monitor internal or external Exchange email, Microsoft Teams chats and channels, or 3rd-party communication in your organization,” the company explained.

Admins can pinpoint things like the use of inappropriate language, flag sensitive information (financial, medical and health or privacy), etc. Many of the actions available – message filtering, reviewing, tagging, resolving of items, auditing, reporting – can be performed via the security and compliance centers.

Microsoft will finish rolling out the new Supervision updates in the next few weeks. The company notes, though, that all users monitored by supervision policies must have either an Office 365 Enterprise E3 license with the Advanced Compliance add-on or be included in an Office 365 Enterprise E5 subscription.


from Help Net Security http://bit.ly/2B6lu7R

eCommerce credit card fraud is nearly an inevitability

Riskified surveyed 5,000 US-based consumers aged 18 and older about their online shopping behaviors, experience with and prevalence of credit card fraud, repeat shopping likelihood and customer satisfaction to develop a full picture of how consumers react to a number of common shopping experiences.

The results are worrisome for both consumers and merchants, as roughly half of respondents reported experience with credit card fraud and 30% had their purchase wrongly declined, with a corresponding negative impact on their satisfaction and return shopping.

For US consumers, eCommerce credit card fraud is nearly an inevitability. Overall, 49% of consumers surveyed reported having been a victim of credit card fraud, where their card information was illegally used by someone else. But that percentage grew with age, suggesting that becoming a victim is only a matter of time. Among all respondent groups aged 31 or older, a majority of consumers were the victims of credit card fraud.

Unfortunately for merchants, the obvious costs of fraud aren’t the only costs. 49% of customers reported that they do not return to an online retailer after a fraud incident has taken place, meaning that the merchant will pay the cost of the fraud and lose future customers.

But that’s only part of the cost of fraud. Merchants often decline orders out of caution, and previous research conducted by Riskified found that fear of fraud costs even more than the fraud itself, as merchants unnecessarily reject good customers. This survey bears that out, as 30% of respondents reported having an order declined, and 57% of those declines happen to returning customers, squandering the good will merchants had built. The survey further found that roughly 42% of shoppers who experienced a decline moved on, either abandoning the purchase completely (28%) or shopping with a competitor instead (14%).

Even shoppers who aren’t declined may move away from a purchase. 84% of respondents reported abandoning an order before completing the purchase, with many of these shoppers blaming the checkout process. 37.3% abandoned a purchase because of a complicated checkout, while 34.9% blamed a bad mobile experience.

“It’s really difficult for any single retailer to effectively manage their fraud, and this survey shows just how damaging it is when they fail to do so,” said Eyal Raab, vice president of business development. “Merchants need to be able to meet their customers where and how they want to shop, but offering options like omnichannel fulfillment or digital gift cards opens them up to threats. Making accurate decisions and approving good orders not only increases revenue now, it also makes happier, more loyal customers in the future.”

Impact of household income on fraud and reimbursement:

  • 48% of households with an annual income of $1M or more have reported legitimate purchases as fraudulent. This was by far the highest level of false claims of fraud, with no other income bracket even reaching 40%.
  • Meanwhile, lower income households were least likely to be reimbursed for charges fraudulently made with their cards. Only 35% of lower income households were refunded the full amount of the fraudulent activity.

Customers blame merchants for fraud:

  • Among victims of credit card fraud, more than 1 in 4 (29%) blamed the merchant that approved the fraudulent purchase.

Friction leads to cart abandonment:

  • Cart abandonment continues to be a big problem for merchants, and 84% of survey respondents reported abandoning a purchase in progress.
  • While some of that is unavoidable for merchants – unexpected shipping costs and a change of heart led to significant cart abandonment – a difficult checkout process is often the culprit. More than 71% of cart abandoners blamed the checkout process – for being overly complicated, not mobile optimized or seeming untrustworthy – as the reason they abandoned their purchase.

Shoppers watch their wallets:

  • 38% of respondents admitted they have or may have created multiple email addresses to gain additional online shopping discounts. While not illegal, this type of discount abuse can seriously impact merchants’ bottom lines.
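One common defense against this kind of discount abuse is canonicalizing addresses before checking for duplicate accounts. A minimal sketch: Gmail, for instance, ignores dots and anything after a “+” in the local part, so many superficially distinct addresses deliver to one inbox.

```python
# Sketch: reduce an email address to a canonical form before checking
# whether a "new" account duplicates an existing one.
def canonicalize(email):
    local, _, domain = email.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        # Gmail ignores dots and "+suffix" in the local part.
        local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

print(canonicalize("Jane.Doe+promo@gmail.com"))  # janedoe@gmail.com
```

This catches only the simplest variants; determined abusers use genuinely separate mailboxes, which is why merchants also correlate payment details and device fingerprints.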

from Help Net Security http://bit.ly/2Wxm2wo

Free training course material on network forensics for cybersecurity specialists

Based on current best practices, the training includes performance indicators and resources that will help those who take it improve their operational skills in tackling cyber incidents.

Network forensics is more important than ever, since more and more data is sent via networks and the internet. When there is a security incident, network forensics can help reduce the time needed to go from Detection to Containment – an essential step in any major security incident.

When used proactively, network forensics provides a better picture of what your network’s ‘normal’ traffic looks like, leading to more intelligent alerting and fewer false positives.
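The baselining idea can be sketched in a few lines: record each host's typical outbound volume, then flag hosts whose traffic today deviates sharply from that history. Numbers and host names below are invented for illustration.

```python
# Sketch of proactive traffic baselining: flag hosts whose outbound byte
# counts sit far above their historical mean (a crude z-score check).
import statistics

def anomalies(history, today, threshold=3.0):
    """history: {host: [daily outbound byte counts]}; today: {host: bytes}."""
    flagged = []
    for host, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero
        if (today.get(host, 0) - mean) / stdev > threshold:
            flagged.append(host)
    return flagged

history = {"db01": [100, 110, 95, 105], "web01": [500, 480, 520, 510]}
print(anomalies(history, {"db01": 100, "web01": 5000}))  # ['web01']
```

A sudden spike like web01's is exactly the kind of signal that shortens the path from detection to containment, e.g. when it indicates data exfiltration.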

ENISA makes available a ready-to-use version, including manuals for trainers and students, and provides tools and data related to exercise scenarios through Virtual Machines.

The training consists mainly of exercises focused on logging and monitoring, detection, and analysis or data interpretation. For example, one exercise deals with an attack on an ICS/SCADA environment in the energy sector. It starts with the preparation phase, followed by incident analysis and post-incident activity.

Other scenarios within the training refer to how to detect “exfiltration” in a large finance corporation environment, or the analysis of an airport third-party VPN connection compromise.


from Help Net Security http://bit.ly/2Rqbzze

Keysight Technologies introduces solution for PCI Express 5.0 technology

Keysight Technologies released a transmitter (Tx) and receiver (Rx) test solution providing the speed and margins needed to meet the Peripheral Component Interconnect Express 5.0 (PCIe Gen5) standard.

With many 5G wireless devices reported to launch in 2019, the computer/server industry is working to upgrade and enhance network speed with technologies such as 400G Ethernet. PCI Express 5.0 technology is required for computer servers to support the bandwidth of 400G networks, as it is the input/output (I/O) interconnect that has the throughput necessary to support the 400G interface.

Designing integrated circuits and systems utilizing this version of the PCIe standard presents engineering challenges for developers. Keysight’s PCIe 5.0 transmitter and receiver solution provides engineers with the tools necessary to achieve the speed and margins required to meet the standard, with upgradability for future investment protection.

“Data centers need to be upgraded to the next 400G speed rates for operators to offer new services, while preserving quality, meeting the ever-increasing data and storage demands, and minimizing costs,” said Dr. Joachim Peerlings, vice president and general manager of Keysight’s Networks and Data Centers.

“Physical layer transmitter and receiver test tools capable of testing at 32GT/s enable designers to optimize their transmitter, receiver, and channel designs for maximum performance and reliability at the required increased data transfer rate.”

The Keysight PCIe 5.0 receiver test solution enables the design and validation of circuits capable of tolerating attenuated signals at 32 GT/s. PCIe 5.0 utilizes equalization techniques helping the receiver restore the quality of the transmitted signal, allowing for error-free recovery of the digital information from the PCIe Tx signal.

At these data transfer rates, PCIe 5.0 receivers must tolerate signals degraded by the channel’s high-frequency loss characteristics, which would otherwise produce unacceptable bit error ratios (BERs). To address this, Keysight has developed the M8040A High Performance 64Gbaud BERT, enabling physical layer characterization and compliance testing, to characterize receiver performance margins on the physical layer.

In addition, the increase in digital transmission speed and throughput introduce signal integrity challenges such as issues with connector crosstalk and receiver jitter sensitivity, driving the need for an accurate oscilloscope and receiver test solution.

Keysight’s Infiniium UXR-Series oscilloscope provides bandwidth and noise floor performance to maximize margins and meet the challenges presented by the PCIe 5.0 standard, with upgradeability to 110 GHz.


from Help Net Security http://bit.ly/2G0uzmK