Sunday, March 31, 2019

How to Marie Kondo your data


By now you’ve heard about Marie Kondo, author of the New York Times bestseller The Life-Changing Magic of Tidying Up and star of Tidying Up, the new Netflix show that puts her principles of organization and decluttering into practice in family homes throughout Los Angeles.

While the #KonMariMethod has put households across America into an organizing frenzy, we found that her tidying principles can also be applied to solve a core challenge for the business world: too much data.

Businesses ingest enormous amounts of personal data every day. Sometimes this data is critical for business operations (e.g., user behavior), human resources (e.g., hours worked or pay accrued), or generating revenue (e.g., new users), but often it’s not.

Chances are, there are countless data records stored in different internal databases or third-party systems that hold no business utility for your company. But unlike a drawer full of mismatched socks, excess personal data carries liability and risk for the businesses that continue to house it.

New data protection regulations, like the European Union’s General Data Protection Regulation (GDPR) and the upcoming California Consumer Privacy Act (CCPA), are introducing new standards for how companies process personal data. In most cases, businesses need explicit consent from users to collect their personal data. Otherwise, the burden is on your business to prove that its legitimate business interests override users’ data privacy rights.

What would Marie do? Minimize your data

The path to compliance with data protection laws always begins in the same place: data inventory and data minimization. We’ve adapted some of Marie Kondo’s principles to the process of organizing your company’s personal data.

Your goal with this exercise is to determine the data you need and delete the rest. The phrase “data minimization” appears throughout GDPR, and is a good practice for any business that aims for good data governance. By reducing your data stores to include only that which is essential, the risk of exposing sensitive data or missing an important data record when fulfilling a data request is dramatically reduced.

We begin by putting all of your data in one place.

If you’ve watched Tidying Up, you know the moment of reckoning with clutter comes early on, when her clients pile every piece of clothing they own on their bed and are forced to face the reality of their closets. Now imagine if you could open the doors to your systems and databases and pile all of that personal data in one place.

Each company must take the time to collect all the personal data in their data stores so they can begin sorting through the clutter. Until your company is able to truly visualize a data inventory, it is impossible to optimize data processing, which is fundamental to complying with data protection laws like GDPR.

Once the data is in one place and you’re ready to begin a data minimization exercise in earnest, sort by category, not by location. This may feel counterintuitive at first, because companies often think of their data in terms of the systems where records are stored. It may be tempting to purge extra data from one database at a time. But just as ancient tubes of Chapstick live on both your desk and your nightstand, businesses often store duplicate data records in different systems because the information could be useful to different teams for different purposes. When personal data is duplicated and dispersed across a number of databases, the risk to your company increases. Aggregating those data records is complicated, but critical for data protection.

Start with one category at a time, and discard all at once. Until your company is able to look at data by category, it is impossible to truly see the scope of personal data and understand your risk profile. One or two tubes of Chapstick in different places around the house may seem reasonable, but it takes putting all your Chapstick in the same place to realize you have 15 tubes scattered throughout the house. Similarly, if your siloed teams check their respective databases and see roughly 30 expired credit card numbers in each, the scale of the problem is less apparent than when you see 210 expired credit card numbers are stored across all the databases.

The impulse to begin a data minimization project by removing different categories of data from one database at a time is understandable, but it obscures the scale of the clutter and ends up being a circular endeavor. Chances are high that the same categories of data live in multiple databases, and you’ll be forced to revisit the same location countless times to delete different categories of data later on.
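As a toy illustration of why category-first beats location-first, a few lines of Python can tally records across several stores (the database names, categories, and record counts below are invented for the example):

```python
from collections import Counter

# Hypothetical inventory: the same category of personal data duplicated
# across several independent systems (names and counts are illustrative).
databases = {
    "crm":     [{"category": "credit_card"}] * 30,
    "billing": [{"category": "credit_card"}] * 30,
    "support": [{"category": "email"}] * 12,
}

# Tally by category across every store, not store by store.
inventory = Counter(
    record["category"]
    for records in databases.values()
    for record in records
)

# Each silo looks modest (30 records), but the cross-system total is what
# actually describes the company's risk profile.
print(inventory["credit_card"])
```

Each system in isolation looks manageable; the category-wide total is what reveals the true exposure.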

Do I need this data?

In Tidying Up, clients evaluate each piece of clothing individually. If an item “sparks joy,” it stays; if it does not, it goes. Looking at each data record individually is unrealistic, but the spirit is the same. Focus on the data your business needs to keep, then delete or anonymize the rest. A good place to start is the law itself: audit the data you collect, and if you can’t justify an individual category, you have an obligation to delete that data. For the data you do keep, make sure it was collected with explicit consent and complies with your regulatory obligations. When reviewing data records with your team, ask, “Do we need this data?” If not, remove it.

Ideally, the big push for organizing your company’s personal data stores only happens once. In order to maintain data stores that are organized and compliant, establish transparent data collection policies with the public, and clear data retention policies internally.

Define rules around what categories of data are collected and stored, and for how long the data is stored. No personal data record should be stored indefinitely.
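One way to make such rules enforceable is to express them as code. The sketch below is a minimal example; the category names and retention windows are illustrative, and real periods must come from your legal and business requirements:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows per category of personal data.
RETENTION = {
    "invoice": timedelta(days=7 * 365),   # e.g. tax-driven retention
    "web_log": timedelta(days=90),
}

def expired(record, now=None):
    """True if the record has outlived its window or has no documented need."""
    now = now or datetime.now(timezone.utc)
    limit = RETENTION.get(record["category"])
    if limit is None:                     # no justified category: delete
        return True
    return now - record["stored_at"] > limit

# A 120-day-old web log exceeds its 90-day window and should be purged.
old_log = {"category": "web_log",
           "stored_at": datetime.now(timezone.utc) - timedelta(days=120)}
```

Note that any category absent from the policy is treated as deletable by default, which mirrors the minimization principle: data without a documented need should not be kept.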

The stakes are high

Processing personal data is riskier than ever now that GDPR is in effect. France fined Google $57 million for violating GDPR, and many other tech giants face similar complaints. Companies like Google and Amazon will survive steep fines levied by authorities, but growth-stage enterprises might not. Regulators are not going to let violations slide, making the risk of a penalty very real. Tidying up your data is essential to compliance and to the health of your business, and it ought to be a top priority this year.


from Help Net Security https://ift.tt/2WDQVii

Nearly all consumers are backing up their computers, but data loss is here to stay

65.1 percent of consumers or their family members lost data as a result of accidental deletion, hardware failure, or a software problem – a jump of 29.4 percentage points from last year.


Yet for the first time in the survey’s four-year history, nearly all consumers (92.7 percent) are backing up their computers – an increase of 24.1 percentage points from last year and the single largest year-over-year increase, according to the Acronis 2019 World Backup Day Survey.

“At first glance, those two findings might seem completely incompatible – how can more data be lost if nearly everyone is backing up?” said James Slaby, Director, Cyber Protection at Acronis.

“Yet there are hints at why these numbers look this way in the survey. People are using more devices and accessing their data from more places than ever before, which creates more opportunities to lose data. They might back up their laptop, but if they didn’t back up the smartphone they just forgot in a cab, they’re still losing data.”

The Acronis survey targeted users in the U.S., U.K., Australia, Germany, Poland, Spain, France, Japan, Singapore, Bulgaria, and Switzerland, polling consumers and, for the first time, business users.

The increasing number of CEOs, CIOs and other executives losing their jobs as a result of data breaches, online attacks, and IT missteps prompted Acronis to incorporate their data concerns and practices into the study.

The addition of business users revealed several differences in how and why consumers and corporations protect their digital assets.

Only 7% of consumers don’t even try to protect their data

The number of devices being used by consumers continues to climb, with 68.9 percent of households reporting they have three or more devices – including computers, smartphones, and tablets. That’s up 7.6 percent from 2018.

Given the amount of data used and the stories about people losing their homes to fires and floods, as well as data losses due to high-profile ransomware attacks and security breaches, the increase in reported backups suggests consumers are at least trying to protect their data.

This year, only 7 percent of consumers said they never back up, while nearly a third of last year’s respondents (31.4 percent) did not back up their personal data.

They also report valuing their data more, with 69.9 percent reporting they would spend more than $50 to retrieve lost files, photos, videos, etc. Last year, fewer than 15 percent would have paid that much.

To protect their data, 62.7 percent of consumers keep it nearby, backing up to a local external hard drive (48.1 percent) or to a hard drive partition (14.6 percent). Only 37.4 percent are using the cloud or a hybrid approach of cloud and local backups.

The lack of cloud adoption presents another apparent disconnect. Consumers overwhelmingly said that having access to data is the greatest benefit of creating backups, with a significant majority selecting “fast, easy access to backed up data, wherever I am” above all other features.

Yet just over a third of them back up to the cloud, which would give them the ability to retrieve files from anywhere.

The type of data consumers are most concerned with losing seems to be contacts, passwords, and other personal information (45.8 percent), followed by media files like photos, videos, music, and games (38.1 percent).

Consumer awareness of the online attacks that threaten their data remains limited: only 46 percent know about ransomware, 53 percent about cryptomining malware, and 52 percent about the social engineering attacks used to spread malware. Education on these dangers seems to be slow, as the number of consumers who know about ransomware increased only 4 percent since last year.

Companies aggressively protect data in the cloud

Since an hour of downtime costs an estimated $300,000 in lost business, corporate users clearly know the value of their data. And since CEOs and C-level executives are being held more accountable for their data protection and security postures in the wake of a rash of high-profile incidents, leadership is taking a more active interest.

That helps explain why the business users who responded to Acronis’ World Backup Day Survey are already prepared to protect their files, apps, and systems – with a significant majority citing safety and security of their data as the benefits that are most important to them, ranking them first and second, respectively.

The 2019 survey is the first time companies were included in the annual poll, and responses came from businesses of all sizes: 32.7 percent were small businesses with under 100 employees, 41.0 percent were medium-sized businesses with 101 to 999 employees, and 26.3 percent were large enterprises employing 1,000 people or more.

Regardless of the company size, the majority make protecting their data a priority by backing up their company data monthly (35.1 percent), weekly (24.8 percent), or daily (25.9 percent). As a result, 68.7 percent said they did not suffer a data loss event during the past year that resulted in downtime.

These companies were also clearly aware of the latest risks to their data, which is why they reported being either concerned or highly concerned about ransomware (60.6 percent), cryptojacking (60.1 percent), and social engineering attacks (61.0 percent).

In practice, businesses of all sizes rely on cloud backups, with 48.3 percent using the cloud exclusively and 26.8 percent using a combination of local and cloud backup.

Given their data protection concerns of safety and security, their reliance on the cloud is understandable. For safety (“reliable backups so data is always available for recovery”), an offsite cloud backup ensures their data survives even if a fire, flood, or natural disaster destroys their facilities.

In terms of security (“data protected against online threats and cybercriminals”), the cloud provides a buffer that such malware attacks have difficulty breaching.

Cyber protection recommendations

Whether you are concerned about personal files or securing your company’s business continuity, Acronis has four simple recommendations to help protect your data:

  • Always create backups of important data. Keep copies of the backup both locally (so it’s available for fast, frequent recoveries) and in the cloud (to guarantee you have everything if a fire, flood, or disaster hits your facilities).
  • Ensure your operating system and software are current. Relying on an outdated OS or app means it lacks the bug fixes and security patches that help block cybercriminals from gaining access to your systems.
  • Beware of suspicious email, links, and attachments. Most virus and ransomware infections are the result of social engineering techniques that trick unsuspecting individuals into opening infected email attachments or clicking on links to websites that host malware.
  • Install anti-virus software and enable automatic updates so your system is protected against common, well-known malware strains. Windows users should confirm that Windows Defender is turned on and up to date.
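The first recommendation, keeping both a local and a cloud copy, can be sketched in a few lines. This is a minimal illustration, not a complete backup tool; the cloud step is indicated only in a comment because it needs provider credentials (the boto3 call shown there is one assumed option, not part of the advice above):

```python
import pathlib
import shutil
import tempfile

def backup(source_dir, backup_dir):
    """Archive source_dir to a local disk; the same file would then go offsite."""
    dest = pathlib.Path(backup_dir) / "backup"
    archive = shutil.make_archive(str(dest), "zip", source_dir)
    # Offsite copy (illustrative assumption, requires credentials):
    #   boto3.client("s3").upload_file(archive, "my-bucket", "backups/backup.zip")
    return pathlib.Path(archive)

# Demonstration against a throwaway directory.
src = tempfile.mkdtemp()
(pathlib.Path(src) / "photos.txt").write_text("irreplaceable data")
archive = backup(src, tempfile.mkdtemp())
```

The local copy serves the fast, frequent restores; the offsite copy is what survives a fire or flood at the primary location.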

from Help Net Security https://ift.tt/2YGgvVl

Main source of threat to industrial computers? Mass-distributed malware

Malicious cyber activities on Industrial Control System (ICS) computers are considered an extremely dangerous threat as they could potentially cause material losses and production downtime in the operation of industrial facilities.


Attack workflow

In 2018, the share of ICS computers that experienced such activities grew to 47.2 percent from 44 percent in 2017, indicating that the threat is rising.

According to the new Kaspersky Lab ICS CERT report, the top three countries in terms of the percentage of ICS computers on which Kaspersky Lab prevented malicious activity were the following: Vietnam (70.09%), Algeria (69.91%), and Tunisia (64.57%). The least impacted nations were Ireland (11.7%), Switzerland (14.9%), and Denmark (15.2%).

“Despite the common myth, the main source of threat to industrial computers is not a targeted attack, but mass-distributed malware that gets into industrial systems by accident, over the internet, through removable media such as USB-sticks, or emails,” said Kirill Kruglov, security researcher at Kaspersky Lab ICS CERT.

“However, the fact that the attacks are successful because of a casual attitude to cybersecurity hygiene among employees means that they can potentially be prevented by staff training and awareness – this is much easier than trying to stop determined threat actors.”

Kaspersky Lab ICS CERT recommends implementing the following technical measures:

  • Regularly update operating systems and application software on systems that are part of the enterprise’s industrial network.
  • Apply security fixes to PLC, RTU and network equipment used in ICS networks where applicable.
  • Restrict network traffic on ports and protocols used on edge routers and inside the organization’s OT networks.
  • Audit access control for ICS components in the enterprise’s industrial network and at its boundaries.
  • Deploy dedicated endpoint protection solutions on ICS servers, workstations and HMIs.
  • Make sure security solutions are up-to-date and all the technologies recommended by the security solution vendor to protect from targeted attacks are enabled.
  • Provide dedicated training and support for employees as well as partners and suppliers with access to your network.
  • Use ICS network traffic monitoring, analysis and detection solutions for better protection from attacks potentially threatening technological process and main enterprise assets.


Possible initial infection paths

“The Kaspersky report shows a continued interest in industrial targets, and shows that attacks on small cloud vendors are pivoting into large enterprise targets. As the Industrial Internet of Things (IIoT) gathers momentum, we are predicting that these two trends will converge – that compromised IIoT cloud providers will provide a new way to pivot into industrial systems,” Andrew Ginter, VP Industrial Security, Waterfall Security Solutions, told Help Net Security.


from Help Net Security https://ift.tt/2HReyAu

Organizations investing in security analytics and machine learning to tackle cyberthreats

IT security’s greatest inhibitor to success is contending with too much security data. To address this challenge, 47 percent of IT security professionals acknowledged their organization’s intent to acquire advanced security analytics solutions that incorporate machine learning (ML) technology within the next 12 months.


Such investments help to mitigate the risks of advanced cyberthreats missed by traditional security defenses, aiding enterprise cyberthreat hunting endeavors, according to the CyberEdge Group sixth annual Cyberthreat Defense Report (CDR).

With 1,200 IT security decision makers and practitioners participating from 17 countries, six continents, and 19 industries, CyberEdge’s CDR is the most comprehensive study of security professionals’ perceptions in the industry.

Key findings

The 2019 CDR yielded dozens of insights into the challenges IT security professionals faced in 2018 and the challenges they’ll likely continue to face for the rest of this year. Key findings include:

  • Hottest security technology for 2019. Advanced security analytics tops 2019’s most wanted list not only for the security management and operations category, but also for all technologies in this year’s report.
  • Machine learning garners confidence. More than 90 percent of IT security organizations have invested in ML and/or artificial intelligence (AI) technologies to combat advanced threats. More than 80 percent are already seeing a difference.
  • Attack success redux. The percentage of organizations affected by a successful cyberattack ticked up this year, from 77 percent to 78 percent, despite last year’s first-ever decline.
  • Caving in to ransomware. Organizations affected by successful ransomware attacks increased slightly from 55 percent to 56 percent. More concerning, the percentage of organizations that elected to pay ransoms rose considerably, from 39 percent to 45 percent, potentially fueling even more ransomware attacks in 2019.
  • Container security woes. For the second year, application containers edge mobile devices as IT security’s weakest link.
  • Web application firewalls rule the roost. For the second year, the web application firewall (WAF) claims the top spot as the most widely deployed app/data security technology.
  • Worsening skills shortage. IT security skills shortages continued to rise, with 84 percent of organizations experiencing this problem compared to 81 percent a year ago.
  • Security’s slice of the IT budget pie. On average, IT security consumes 13 percent of the overall IT budget. The average security budget is going up by 5 percent in 2019.


“Security analytics and machine learning could very well hit their stride in 2019,” said Steve Piper, CEO of CyberEdge Group.

“We surveyed our research participants on their intended cyber investments across a broad range of security technologies. Respondents identified ‘advanced security analytics with machine learning’ as the most-wanted security technology for the coming year. This makes sense, given that ‘too much data to analyze’ surpassed ‘lack of skilled personnel’ as the greatest inhibitor to IT security’s success.”

“A decade after the transformative Aurora attack, you have to start wondering how long organizations can sustain such elevated investments in cybersecurity. Beanstalks don’t grow to the sky, right?” said Mike Rothman, president of Securosis.

“Yet, the data tells another story. According to this year’s CDR report, the average security budget consumes 13 percent of the overall IT budget, up from 5 percent just two decades ago. And it continues to grow, with an average of 5 percent planned growth moving forward. Exacerbated by the critical shortage of qualified IT security personnel, there will be a continued focus on smart investment in technologies that make security more effective and efficient.”


from Help Net Security https://ift.tt/2HQjANy

Security and privacy still the top inhibitors of cloud adoption

Cloud adoption is gaining momentum, as 36 percent of organizations are currently in the process of migrating to the cloud while close to 20 percent consider themselves to be in the advanced stages of implementation.

Top cloud challenges


Due to the number of ways data is stored and the amount of time it takes to migrate these sources to the cloud, hybrid cloud is the most popular architecture (46 percent), followed by private cloud, multi-cloud, and public cloud, according to the second annual cloud usage survey conducted by Denodo.

In the survey of 201 business executives and IT professionals from a diverse range of technical backgrounds, respondents reported adopting cloud computing to become more agile, lower IT costs, and gain the ability to scale.

The top cloud providers maintained their 2018 positions, with AWS leading the pack (67 percent), followed by Microsoft Azure (60 percent) and Google Cloud (26 percent). Businesses leverage these providers chiefly to support BI and analytics, followed by data lake formation and hybrid integration on AWS, and data warehousing and hybrid integration on Azure.

In terms of services offered, data warehouse modernization and data lakes are common migration use cases, as the center of data gravity slowly concentrates around the cloud.

While cloud adoption is on the rise, it’s not without its challenges: security remains the top concern (52 percent), followed by managing and tracking cloud spend (44 percent) and a lack of cloud skills (32 percent). Despite these concerns, four out of ten said they would refactor or re-architect their applications to take advantage of cloud computing.

Do you store sensitive data in the public cloud?


Containers are gaining in importance, with Docker the most used (31 percent), followed by Kubernetes (21 percent). Finally, interest in cloud marketplaces continued to grow, with nearly three out of five respondents (59 percent) drawn to their pay-as-you-go subscription models, followed by self-serviceability (48 percent) and a lower cost of entry (40 percent).

With a mix of on-premises and cloud-based data sources and types, many businesses are turning to data virtualization (DV) solutions to take advantage of the agility and flexibility that the cloud provides, and to ensure business professionals can apply the data found in these growing mixed environments.

As a real-time, agile, data integration methodology, DV provides a logical view of all enterprise data without having to replicate information into a physical repository, which saves organizations time, money, and resources.
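Real DV products federate many heterogeneous source types, but the core idea, one logical query over several physical stores with no copying, can be roughly illustrated with SQLite’s ATTACH, used here only because it runs anywhere (the store names and table contents are invented):

```python
import os
import sqlite3
import tempfile

def make_store(path, rows):
    """Create a small physical data store holding a customers table."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE customers (name TEXT)")
    con.executemany("INSERT INTO customers VALUES (?)", [(r,) for r in rows])
    con.commit()
    con.close()

# Two separate physical stores, standing in for, say, an on-premises
# database and a cloud-hosted one.
d = tempfile.mkdtemp()
make_store(os.path.join(d, "eu.db"), ["Alice"])
make_store(os.path.join(d, "us.db"), ["Bob"])

# One logical connection federates both; the data stays where it is,
# with no replication into a central repository.
con = sqlite3.connect(":memory:")
con.execute(f"ATTACH DATABASE '{os.path.join(d, 'eu.db')}' AS eu")
con.execute(f"ATTACH DATABASE '{os.path.join(d, 'us.db')}' AS us")
names = sorted(row[0] for row in con.execute(
    "SELECT name FROM eu.customers UNION ALL SELECT name FROM us.customers"
))
```

The query runs against both stores in place, which is the property DV sells: a single logical view without moving the underlying data.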


from Help Net Security https://ift.tt/2V8ddIp

Automatically and invisibly encrypt email as soon as it is received on any trusted device

While an empty email inbox is something many people strive for, most of us are not successful. And that means that we probably have stored away hundreds, even thousands, of emails that contain all kinds of personal information we would prefer to keep private.

Easy Email Encryption

How users see E3: their insecure email + their devices = secure, encrypted email.

Current defenses, such as Pretty Good Privacy (PGP) and Secure/Multipurpose Internet Mail Extensions (S/MIME), rely on public key cryptography that uses pairs of public and private keys generated by cryptographic algorithms.

Because these systems are too technical and difficult for the average user, most people don’t use them. As a result, many email accounts have been hacked, including such high profile cases as the phishing attack on Hillary Clinton’s top campaign advisor John Podesta and the 2016 email hack of one of Vladimir Putin’s top aides.

In response to these kinds of widespread attacks, computer scientists at Columbia Engineering have built Easy Email Encryption (E3), an application for secure, encrypted email that is easy to manage even for non-technical users.

Now in beta test mode, E3 automatically and invisibly encrypts email as soon as it is received on any trusted device, including smartphones, laptops, and tablets. It works on a variety of platforms including Android, Windows, Linux, and Google Chrome, and with popular mail services such as Gmail, Yahoo, AOL, and more.

The team – Professors Jason Nieh and Steve Bellovin and their PhD student John S. Koh – presented the study today at EuroSys ’19 in Dresden, Germany, one of the world’s top forums for computer systems software research and development.

“Email privacy grows ever more critical as our email inboxes increase in size,” notes Koh, the paper’s lead author. “Thanks to free and widely popular mail services like Gmail, users are keeping more and more emails, thus providing a one-stop shop for hackers who can compromise all of a user’s emails with a single successful attack.”

Ever since 1999, when the seminal “Why Johnny Can’t Encrypt” paper showed how extraordinarily hard it was for people to send encrypted email, researchers have been trying to design encryption systems that are easier for the average user to manage.

The problem is that they have stayed focused on end-to-end encryption solutions, where only the original sender and recipient can read the messages. Third-parties, including telecommunications and Internet providers, cannot eavesdrop as they cannot access the cryptographic keys to decrypt the conversation.

While these solutions certainly work and offer the most security, PGP and S/MIME, the encryption solutions most favored by experts, are so complex that they are impractical, almost unusable, for a non-technical user.

“The field of email security is just begging for improvement,” Koh notes. “For 20 years, the research community was fixated on end-to-end security. We took a different tack, positing that end-to-end encryption for email is not needed in the 21st century. Internet connections are increasingly protected by default using encryption.”

“Our insight is that email needs to be protected when it’s stored in our inboxes, not when it’s being sent over the Internet, because hackers are mainly just trying to log into your email account. We thus apply an ‘encrypt on receipt’ model that provides excellent real-world security while being far more usable than end-to-end encryption.”

Over the past three years, the Columbia Engineering team refined E3, trying many different approaches before finding a method that checked all the boxes they needed. They have been testing the app with a couple dozen study participants, many of whom were not particularly tech-savvy.

All agreed that E3 is significantly easier to use than the state-of-the-art systems for secure email, to the point where E3 is almost as easy to use as a regular email client.

The team’s new approach simplifies email encryption and improves its usability by implementing receiver-controlled encryption. Newly received messages are transparently downloaded and encrypted to a locally generated key and the original message is then replaced.

A major problem was how to handle multiple devices, especially important these days as most people read email on several devices. Rather than moving private keys around, which is hard to do securely and puts great demands on the user, the researchers used per-device key pairs.

With this approach, only public keys need to be synchronized via a simple verification step. Hackers who successfully attack an email account or server can only gain access to encrypted emails. All emails encrypted prior to a breach are protected.

In E3, public keys are never shared with other people. They are self-generated and self-signed, and require no public key infrastructure for the user to understand. Previous work has shown that users find it confusing to correctly obtain and use public keys.

In contrast, an E3 user needs only self-signed keys, and any public key exchanges among the user’s devices are automated.
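The encrypt-on-receipt flow can be sketched in a few lines. E3 itself uses public-key cryptography with per-device keypairs; the keyed-hash stream cipher below is a toy stand-in so the sketch runs without third-party libraries, and must never be used as real encryption:

```python
import hashlib
import os

def keystream(key, length):
    """Toy keystream: SHA-256 of key plus a counter (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)
    ks = keystream(key + nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key, blob):
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key + nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

# Each trusted device holds its own locally generated key. On receipt,
# the client encrypts the message per device and replaces the plaintext.
device_keys = {"laptop": os.urandom(32), "phone": os.urandom(32)}
message = b"Subject: payroll\n\nConfidential contents"
mailbox = {dev: encrypt(k, message) for dev, k in device_keys.items()}
# The stored mailbox now holds only ciphertext; each device can still read.
```

An attacker who compromises the mailbox sees only ciphertext, while each trusted device decrypts with its own key, which is the property the researchers describe.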

The researchers note that they do not intend E3 to be an end-to-end, maximum security solution, but rather a major improvement over the norm that is easy to deploy and use.


from Help Net Security https://ift.tt/2FEM4GC

AWS releases new S3 storage for long-term data retention

Amazon Web Services (AWS), an Amazon.com company, announced the general availability of Amazon S3 Glacier Deep Archive, a new storage class that provides secure, durable object storage for long-term retention of data that is rarely accessed.

At just $0.00099 per GB-month (less than one-tenth of one cent, or $1 per TB-month), S3 Glacier Deep Archive offers the lowest cost storage in the cloud, at prices significantly lower than storing and maintaining data in on-premises magnetic tape libraries or archiving data off-site.

Organizations in many market segments (e.g., financial services, healthcare, and government) are required to retain data for long periods of time to meet regulatory compliance requirements. In addition, there are organizations, such as media and entertainment companies, that want to keep a backup copy of core intellectual property.

These datasets are often very large, consisting of multiple petabytes, and yet typically only a small percentage of this data is ever accessed—once or twice a year at most. To retain data long-term, many organizations turn to on-premises magnetic tape libraries or offsite tape archival services.

However, maintaining this tape infrastructure is difficult and time-consuming; tapes degrade if not properly stored and require multiple copies, frequent validation, and periodic refreshes to maintain data durability. Additionally, it is difficult or impossible to do machine learning and other types of analysis directly on data stored on tape.

Now, with S3 Glacier Deep Archive, customers with large datasets they want to retain for long periods will be able to eliminate both the cost and management of tape infrastructure, while ensuring that their data is preserved for future use and analysis, such as in oil and gas seismic exploration and developing autonomous vehicles.

Customers can still use S3 Glacier when they want retrieval options in minutes for archive data, while S3 Glacier Deep Archive is ideal for customers who want the lowest cost for archive data that is rarely accessed. In the event that recovery becomes necessary, the objects can be recovered in as little as 12 hours with S3 Glacier Deep Archive versus days or weeks with off-site tape.

“We have customers who have exabytes of storage locked away on tape, who are stuck managing tape infrastructure for the rare event of data retrieval. It’s hard to do and that data is not close to the rest of their data if they want to do analytics and machine learning on it,” said Mai-Lan Tomsen Bukovec, Vice President, Amazon S3, AWS.

“S3 Glacier Deep Archive costs just a dollar per terabyte per month and opens up rarely accessed storage for analysis whenever the business needs it, without having to deal with the infrastructure or logistics of tape access.”

With six different storage class options, Amazon S3 provides the broadest array of cost-optimization options available in the cloud today. All objects stored in S3 Glacier Deep Archive are replicated and stored across at least three geographically-dispersed Availability Zones, designed for 99.999999999% (eleven nines) durability, and can be restored within 12 hours or less.

S3 Glacier Deep Archive also offers a bulk retrieval option that lets customers retrieve petabytes of data within 48 hours. Customers can upload data to S3 Glacier Deep Archive over the internet or via AWS Direct Connect, using the AWS Management Console, AWS Storage Gateway, AWS DataSync, the AWS Command Line Interface, or the AWS Software Development Kit (SDK).
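As a minimal SDK sketch (bucket and key names are hypothetical), writing directly to the new storage class is just a matter of setting `StorageClass` on a standard `PutObject` call:

```python
def deep_archive_put_kwargs(bucket, key, body):
    """Kwargs for an S3 PutObject call that writes straight to Deep Archive."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "StorageClass": "DEEP_ARCHIVE",  # instead of the default STANDARD class
    }

# With boto3 installed and AWS credentials configured:
#   boto3.client("s3").put_object(**deep_archive_put_kwargs(
#       "example-archive-bucket", "backups/2019-03-31.tar", open("backup.tar", "rb")))
```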

S3 Glacier Deep Archive is integrated with Tape Gateway, a cloud-based virtual tape library feature of AWS Storage Gateway, so customers using it to manage on-premises tape-based backups can choose to archive their new virtual tapes in either S3 Glacier or S3 Glacier Deep Archive. S3 Glacier Deep Archive is available in all AWS commercial and AWS GovCloud (US) Regions.

Deluxe is the world’s leading video creation to distribution company offering global, end-to-end services and technology. Through unmatched scale, technology and capabilities, Deluxe enables the global market for video content.

“As the demand for higher quality and increased amounts of content continues to rapidly grow, we will now have the ability to eliminate the limitations of a hybrid on-prem tape model by using S3 Glacier Deep Archive to reduce access time and rapidly shift the availability and workability of content sources exclusively on the cloud,” said Andy Shenkler, Chief Product Officer, Deluxe.

“AWS’s S3 Glacier Deep Archive addresses the challenges that have previously existed around the economics and timelines associated with accessing and utilizing large media assets throughout every step of the content creation and distribution process.”

Vodacom is a leading communications company providing a wide range of services, including mobile and fixed voice, messaging, data, financial, Enterprise IT, and converged services to over 73 million customers across the African continent.

“We are a data driven organization and use current and historical information to provide personalized customer offers, improve user retention, and ensure a higher quality of network service through analysis,” said Willie Stegmann, Group Chief Information Officer, at Vodacom.

“We identified archive and backup storage as key candidates to migrate offsite in an attempt to reduce the time we spend managing storage infrastructure. S3 Glacier Deep Archive provides us with near limitless secure and durable capacity at a cost so low that we no longer need to consider deleting critical data. The use of S3 Glacier Deep Archive also means that we can easily meet recovery time objectives of 12 to 48 hours for the identified backup and archive storage sets.”

The Academic Preservation Trust (APTrust) is a consortium of higher education institutions committed to providing both a preservation repository for digital content and collaboratively developed services related to that content.

APTrust helps address one of the great challenges facing research libraries and their parent institutions—preventing the permanent loss of scholarship and cultural records.

“We accept all types of digital content from member institutions—print, audio, video, encrypted, and other types of files,” said Chip German, Program Director, APTrust.

“Our members deposit with us all sorts of data they consider valuable, including data that’s required by funders to be preserved and made accessible for a set period. The copies we store are usually secondary ones in case a disaster damages the primary copy, but that level of assurance comes at a cost that researchers and their institutions often haven’t fully anticipated. Amazon S3 Glacier Deep Archive gives APTrust members a much more affordable option to preserve their data for as long as they desire, and to do so confidently and conveniently.”

Commvault, an AWS Partner Network member, is a recognized global enterprise software leader in the management of data for cloud and on-premises environments.

“Our customers need to be able to move, manage, and use data in a way that promotes business agility and contains costs,” said Karen Falcone, Vice President, Worldwide Cloud and Service Providers, at Commvault.

“With Commvault’s support for AWS, customers get a single, comprehensive data management platform with full data protection, backup, recovery, management, and eDiscovery capabilities—all tightly integrated with AWS services. And now, S3 Glacier Deep Archive will allow us to provide the lowest cost storage available in the cloud and have it accessible, if necessary, in the future. For our customers in regulated industries, that can mean petabytes of data going back years. Customers can use S3 Glacier Deep Archive today as an Early Release feature.”

Veritas is an AWS Partner Network Advanced Storage competency partner and the proven industry leader in data protection and software defined storage solutions for over two decades.

“Our customers need to be able to harness the power of their information with solutions designed to serve the world’s most complex and largest heterogeneous environments while accelerating digital transformation, reducing risk, and delivering cost savings,” said Cameron Bahar, SVP & CTO, at Veritas.

“With Veritas solutions supporting AWS, we continue to extend our support for cloud usage models and provide our customers simple and agile solutions to solve complex data management issues for backup/recovery, archiving, primary storage, and disaster recovery use cases. With Amazon S3 Glacier Deep Archive, Veritas will be able to help customers increase their savings even more significantly. Veritas customers can use S3 Glacier Deep Archive Standard tier with the latest NetBackup version as of today.”


from Help Net Security https://ift.tt/2COWMK6

Accenture reports revenues of $10.5 billion

Accenture reported financial results for the second quarter of fiscal 2019, ended Feb. 28, 2019, with revenues of $10.5 billion, an increase of 5 percent in U.S. dollars and 9 percent in local currency over the same period last year.

Diluted earnings per share were $1.73, compared with $1.37 for the second quarter last year, which included a $0.21 charge related to U.S. tax law changes.

Diluted EPS for the second quarter of fiscal 2019 increased 9 percent from adjusted diluted EPS of $1.58 in the same period last year.

Operating income was $1.39 billion, a 7 percent increase over the same period last year, and operating margin was 13.3 percent, an expansion of 20 basis points.

New bookings for the quarter were $11.8 billion, with consulting bookings of $6.7 billion and outsourcing bookings of $5.1 billion.

David Rowland, Accenture’s interim chief executive officer, said, “We delivered outstanding financial results for the second quarter. I am particularly pleased with our record new bookings of $11.8 billion and revenue growth of 9 percent in local currency, which reflect significant market share gains. In addition, we delivered very strong profitability while generating excellent free cash flow.”

“The durability of our performance reflects the power of our highly differentiated growth strategy — from our leadership position in the New, to our rapidly growing business in Intelligent Platform Services, to our relentless focus on leading with innovation. With the successful execution of our strategy, combined with the disciplined management of our business, we are very well-positioned to continue growing ahead of the market and delivering significant value for clients and shareholders.”


from Help Net Security https://ift.tt/2Oz9Ojx

Week in review: Employee cybersecurity essentials, ASUS attack, lessons learned from crypto hacks


Here’s an overview of some of last week’s most interesting news and articles:

Attackers compromised ASUS to deliver backdoored software updates
Unknown attackers have compromised an update server belonging to Taiwanese computer and electronics maker ASUS and used it to push a malicious backdoor to a huge number of customers. A few days after the revelation (by Kaspersky Lab researchers), ASUS confirmed the compromise and released a clean version of its Live Update software.

Encrypted attacks growing steadily, cybercriminals are increasingly targeting non-standard ports
In 2018, SonicWall recorded a decline in cryptojacking, but increases in ransomware, highly targeted phishing, web application attacks and encrypted attacks.

Employee cybersecurity essentials part 1: Passwords and phishing
Your company may have state-of-the-art monitoring and the latest anti-malware and anti-virus programs, but that doesn’t mean you’re not at risk of a breach, or that you – as an employee – are not putting your company at risk.

What worries you the most when responding to a cybersecurity incident?
The clock starts ticking immediately after a cybersecurity incident, with the first 24 hours vital to the incident response.

Lessons learned from the many crypto hacks
The one poignant lesson that crypto investors globally have learned over the years is that despite the immutable, impenetrable nature of the technology behind cryptocurrencies and blockchain, their crypto investments and transactions are not secure.

Apple fixed some interesting bugs in iOS and macOS
In addition to announcing a number of new products and subscription services, Apple has released security updates for iOS, macOS, Safari, tvOS, iTunes, iCloud, and Xcode.

61% of CIOs believe employees leak data maliciously
There is a perception gap between IT leaders and employees over the likelihood of insider breaches. It is a major challenge for businesses: insider data breaches are viewed as frequent and damaging occurrences, of concern to 95% of IT leaders, yet the vectors for those breaches – employees – are either unaware of, or unwilling to admit, their responsibility.

Identify web application vulnerabilities and prioritize fixes with Netsparker
In this Help Net Security podcast, Ferruh Mavituna, CEO at Netsparker, talks about web application security and how Netsparker is helping businesses of any size keep their web applications secure.

When it comes to file sharing, the cloud has very few downsides
Organizations storing data and documents they work on in the cloud is a regular occurrence these days. The cloud offers scalability in terms of storage and cloud services often provide helpful folder- and file-sharing capabilities and content control measures.

Consumers willing to dump apps that collect private data, but can’t tell which are doing so
Two in three consumers are willing to dump data-collecting apps if the information collected is unrelated to the app’s function, unless they receive real value in return – such as that derived through email or browsers.

How to build an effective vulnerability management program
The concept of vulnerability management has undergone a number of changes in the last few years. It is no longer simply a synonym for vulnerability assessment, but has grown to include vulnerability prioritization, remediation and reporting.

Weighing the options: The role of cyber insurance in ransomware attacks
When companies become victims of a ransomware event, it may be tempting for them to simply pay the ransom and move on. But for organizations who hold a cyber insurance policy, other factors must be analyzed to determine what comes next.

Cybercriminals are increasingly using encryption to conceal and launch attacks
In this Help Net Security podcast, Deepen Desai, VP Security Research & Operations at Zscaler, talks about the latest Zscaler Cloud Security Insight Report, which focuses on SSL/TLS based threats.

The ransomware attack cost Norsk Hydro $40 million so far
A little over a week after the beginning of the ransomware attack targeting Norsk Hydro, the company has estimated that the costs it incurred because of it have reached 300-350 million Norwegian crowns ($35-41 million).

Cisco botched patches for its RV320/RV325 routers
Cisco RV320 and RV325 WAN VPN routers are still vulnerable to attack through two flaws that Cisco had supposedly patched.

Serverless, shadow APIs and Denial of Wallet attacks
In this Help Net Security podcast, Doug Dooley, Chief Operating Officer at Data Theorem, discusses serverless computing, a new area that both DevOps leaders and enterprise security leaders are having to tackle.

2017 Cisco WebEx flaw increasingly leveraged by attackers, phishing campaigns rise
Network attacks targeting a vulnerability in the Cisco Webex Chrome extension have increased dramatically. In fact, they were the second-most common network attack, according to WatchGuard Technologies’ latest Internet Security Report for the last quarter of 2018.

Secure workloads without slowing down your DevOps flows
In this Help Net Security podcast recorded at RSA Conference 2019, David Meltzer, CTO at Tripwire, and Lamar Bailey, Senior Director of Security Research at Tripwire, discuss the challenges of securing DevOps.

Third-party cyber risk management is a burden on human and financial resources
Organizations and third parties see their third-party cyber risk management (TPCRM) practices as important but ineffective.

Build-time security: Block risk and security issues from production rings
Build-time security has become a standard part of any security program and continues to grow in popularity with the shift left movement. In its most popular form, it’s a series of checks that take place as code makes its way from a developer’s laptop into production to ensure that the code is free from known vulnerabilities.

New infosec products of the week: March 29, 2019
A rundown of infosec products released last week.


from Help Net Security https://ift.tt/2CKCLUY

Friday Squid Blogging: Restoring the Giant Squid at the Museum of Natural History

It is traveling to Paris.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.


from Schneier on Security https://ift.tt/2uASdyg

NSA-Inspired Vulnerability Found in Huawei Laptops

This is an interesting story of a serious vulnerability in a Huawei driver that Microsoft found. The vulnerability is similar in style to the NSA's DOUBLEPULSAR that was leaked by the Shadow Brokers -- believed to be the Russian government -- and it's obvious that this attack copied that technique.

What is less clear is whether the vulnerability -- which has been fixed -- was put into the Huawei driver accidentally or on purpose.


from Schneier on Security https://ift.tt/2FK9pbb

Serverless, shadow APIs and Denial of Wallet attacks

In this Help Net Security podcast, Doug Dooley, Chief Operating Officer at Data Theorem, discusses serverless computing, a new area that both DevOps leaders and enterprise security leaders are having to tackle.

Here’s a transcript of the podcast for your convenience.

Hi, my name is Doug Dooley of Data Theorem. We are a leading provider of modern application security and I want to talk a little bit about a new area that both DevOps leaders and enterprise security leaders are having to tackle. The new area is serverless computing.

What is serverless?

We’re going to talk first about what serverless is. Serverless computing is a new application execution model that automates the orchestration of infrastructure at runtime. In other words, when a developer builds a new application on top of serverless, the platform automates the traditionally manual work of spinning up and scaling out the compute, network, storage, databases and all the other underlying infrastructure needed to support the application. Similarly, when the application is no longer in use, it scales everything back down.
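A minimal sketch of that model: the developer writes only the function body, and the platform provisions and scales everything around it. The handler below follows the shape of an AWS Lambda Python handler, and the event fields are made up for illustration.

```python
def handler(event, context=None):
    """A minimal Lambda-style function: the platform, not the developer,
    provisions, scales and tears down the infrastructure around this code."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Locally, the same function can be exercised directly:
handler({"name": "serverless"})  # -> {"statusCode": 200, "body": "hello, serverless"}
```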

Advantages and benefits of serverless

The big advantages serverless has been delivering for the past four years, for applications built on it, are dramatically lower cost, because you only pay while the application is in use, and significantly greater ease of use, because the orchestration and automation of all the infrastructure is taken care of by the cloud providers – specifically, Amazon with its Lambda service, Google Cloud with Cloud Functions, and Microsoft with Azure Functions.

This is a relatively new area that has grown substantially in popularity in the last four years. Just to put a statistic on that, Amazon released data showing that it took about 10 years for Docker containers to reach about 24% usage by their customer base. Lambda, in comparison, has reached a similar percentage, 23.5%, in just four years since it was introduced.

Serverless is growing more than twice as fast as containers in popularity among their customer base. Because of this fast-growing popularity – particularly among developers taking advantage of serverless to make their lives easier and to dramatically lower the cost of application development and execution – it has created some new and interesting challenges for enterprise security.

On the positive side, because there are no traditional servers staying persistent all the time, there is this nice benefit of a clean slate on the operating systems and compute that support applications. Normally, malware and difficult attacks can stay persistent, or even dormant, inside your infrastructure; with serverless, these things are constantly scaling up and scaling down because the infrastructure is ultimately ephemeral. So it’s hard for bad applications or malware to stay hidden in your environment for long periods of time, because it’s all getting cleared out frequently.

Shadow APIs and Denial of Wallet (DoW) challenges

There are some new interesting challenges with this serverless approach. One of them is this concept of Shadow APIs. Because most of these applications are now being built with a microservices architecture, you have these smaller, reusable pieces of software that ultimately support an enterprise application built on serverless.

Most of these microservices are interconnected with one another through communication via APIs, typically RESTful APIs. Whether these RESTful APIs are viewed as publicly consumable or as private, to be used only to interconnect the microservice fabric, either way, once they are on the public cloud they are inherently accessible and available to any attacker or any potentially malicious software. One of the things that’s starting to happen is that enterprises don’t know what they don’t know about the number of APIs being published and consumed by these modern applications built on serverless.

This is a new challenge from a discovery perspective, to find all of these Shadow APIs that exist in the enterprise environment, and there needs to be new tools and new techniques on how to go about that discovery. That’s one of the new interesting challenges for security when developers are using this new concept of serverless.
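One simple way to frame that discovery problem (the endpoint names here are invented for illustration) is as a set difference between the APIs observed on the wire and those in the official inventory:

```python
def find_shadow_apis(inventoried, observed):
    """APIs seen in live traffic but missing from the inventory are 'shadow' APIs."""
    return sorted(set(observed) - set(inventoried))

inventoried = {"/v1/users", "/v1/orders"}
observed = {"/v1/users", "/v1/orders", "/v1/internal/debug"}
find_shadow_apis(inventoried, observed)  # -> ["/v1/internal/debug"]
```

Real discovery tooling has to build the “observed” set from traffic captures or cloud-provider metadata, which is the hard part; the set difference itself is trivial.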

The second challenge that is starting to pop up is a new class of attack called Denial of Wallet (DoW). A Denial of Wallet attack is similar to what we know as Denial of Service. If a bad actor or some threat goes after one of your applications with the intent of bombarding it with fictitious requests that busy up your application, what happens when that application is built on a serverless architecture is that the underlying resources keep scaling up and spinning up in order to deal with the increased number of requests.

These requests are intended to potentially take down your service, but because of the nature of the public cloud and services like Lambda and Cloud Functions, they will continue to spin up in order to handle the load. As a result, the cost to the enterprise continues to balloon out of control from a financial perspective to the point where it’s really ultimately hurting the wallet or hurting the bottom line of the company paying that bill for their serverless infrastructure.

Again, we’ve known about DoS for quite a long time, but Denial of Wallet is sort of a new type of attack, a financial attack, that sort of takes advantage of this auto scaling nature of serverless.
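A back-of-the-envelope model shows why auto-scaling turns this into a financial attack. The prices below are approximate Lambda list prices from around the time of this podcast, and the attack parameters are made up:

```python
# Approximate AWS Lambda list prices (circa 2019; check current pricing)
PRICE_PER_REQUEST = 0.20 / 1_000_000   # per invocation
PRICE_PER_GB_SECOND = 0.0000166667     # per GB-second of compute

def flood_cost(requests, duration_s=1.0, memory_gb=0.5):
    """Estimated extra bill inflicted by a flood of fictitious requests."""
    invocations = requests * PRICE_PER_REQUEST
    compute = requests * duration_s * memory_gb * PRICE_PER_GB_SECOND
    return invocations + compute

flood_cost(100_000_000)  # 100M junk requests at 1 s / 512 MB each: roughly $850
```

Capping a function’s reserved concurrency and alerting on billing anomalies are the usual guardrails against this kind of attack.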

These are just two new interesting concepts that enterprise security teams and DevOps teams are now starting to get their arms around from a security perspective when applications are built on serverless, and there are actually several other new challenges.

But we just wanted to highlight and bring attention to the fact that on one hand, serverless has incredible cost advantages and simplicity advantages, and also some interesting security advantages because the application infrastructure ephemerally recreates itself often. But there are these new classes of attacks and these new threats that folks are having to deal with. There are some exciting times happening in security due to serverless, and we need to stay tuned for more innovations coming this way.


from Help Net Security https://ift.tt/2uvw7NR

Thursday, March 28, 2019

New infosec products of the week: March 29, 2019

Guardicore launches freely available public resource for investigating malicious IP addresses and domains

Guardicore Threat Intelligence is a freely available public resource for identifying and investigating malicious IP addresses and domains. With an easy to understand dashboard, it rates top attackers, top attacked ports and top malicious domains, giving security teams the insight they need to research and understand attacks and mitigate risks.

F5’s new delivery model leverages the AWS SaaS Enablement Framework

F5 Networks extends its portfolio with a new delivery model that leverages the AWS SaaS Enablement Framework for its application services. F5 Cloud Services provide high-availability, self-service, and fully managed SaaS solutions that are easily provisioned and configured within minutes on AWS. As enterprise-grade offerings, they are designed to support modern deployment scenarios such as cloud-native applications, microservices, and container-based environments.

New XebiaLabs predictive risk solution for DevOps helps forecast release success or failure

Using a new DevOps Prediction Engine, the XebiaLabs DevOps Platform’s Risk Prediction Module combines machine learning with a proprietary algorithm to give teams a “weather forecast” for their releases before those releases even start to run. The XebiaLabs DevOps Prediction Engine highlights potential bottlenecks before release processes start, so Release Managers, Developers, and DevOps Engineers can take preventative action, adjust timelines, and keep the business in the loop.

ConnectWise launches ConnectWise Identify, a new security assessment tool for MSPs

ConnectWise Identify allows managed service providers (MSPs) to easily assess their own and their customers’ current security posture against a wide variety of malicious cybersecurity threats. The result is an easy-to-understand, customized risk report with remediation options, all from a single pane of glass, that has implications for the entire business, not just the network.

Acuris Cybercheck enables businesses and individuals to monitor against compromised ID data

Acuris Cybercheck is a database that allows businesses and individuals to identify whether or not their information has been compromised by criminals. Compiled since 2008, the database includes information traded on criminal websites globally, such as identity, personal and financial data. It also allows users to search specific records via APIs or to continuously monitor data.


from Help Net Security https://ift.tt/2JQcbzy

Enterprises fear disruption to business critical applications, yet don’t prioritize securing them

The majority of organizations (nearly 70 percent) do not prioritize the protection of the applications that their business depends on – such as ERP and CRM systems – any differently than how low-value data, applications or services are secured.

Even the slightest downtime affecting business critical applications would be massively disruptive, with 61 percent agreeing that the impact would be severe, according to the CyberArk survey conducted among 1,450 business and IT decision makers, primarily from Western European economies.

Breaches affecting applications that are the lifeblood of business can result in punitive costs, with a 2018 report estimating the average cost of an attack on an ERP system at $5.5 million.

The threat actors that enterprises face are formidable – organized crime was behind 50 percent of all breaches in 2018, with attacks using established tactics like privileges abuse to achieve their aims.

Despite the fact that more than half (56 percent) of organizations have experienced data loss, integrity issues or service disruptions affecting business critical applications in the previous two years, the survey found a large majority (72 percent) of respondents are confident that their organization can effectively stop all data security attacks or breaches at the perimeter.

This brings to light a remarkable disconnect between where security strategy is focused and the business value of what is most important to the organization. An attacker targeting administrative privileges for these applications could cause significant disruption and could even halt business operations.

The survey also found that 74 percent of organizations indicated they have moved (or will move within two years) business critical applications to the cloud. A risk-prioritized approach to protecting these assets is necessary for this transition to be managed successfully.

Further industry data shows that, globally, 69 percent of organizations are migrating data for popular ERP applications to the cloud.

“From banking systems and R&D to customer service and supply chain, all businesses in all verticals run on critical applications. Accessing and disrupting these applications is a primary target for attackers due to their day-to-day operational importance and the wealth of information that resides in them – whether they are on-premises or in the cloud,” said David Higgins, EMEA technical director at CyberArk.

“CISOs must take a prioritized, risk-based approach that applies the most rigorous protection to these applications, securing in particular privileged access to them and assuring that, regardless of what attacks penetrate the perimeter, they continue to run uncompromised.”


from Help Net Security https://ift.tt/2UihnjO

Lessons learned from the many crypto hacks


The one poignant lesson that crypto investors globally have learned over the years is that despite the immutable, impenetrable nature of the technology behind cryptocurrencies and blockchain, their crypto investments and transactions are not secure.

2018, for example, witnessed some of the largest crypto exchange hacks globally, not to mention the alarming volatility in the crypto market that continues to make headlines each day. According to the Cryptocurrency Anti-Money Laundering Report published by CipherTrace, a blockchain security firm, 2018 witnessed a loss of over $1 billion in cryptocurrencies, with the hack of the Japanese crypto exchange Coincheck accounting for more than half of that loss. Other notable breaches include Italy’s BitGrail and South Korea’s Coinrail. In addition, $9 million is stolen from crypto wallets every day. 2018’s crypto losses alone were more than three times those seen in 2017.

The hacking trend seems to continue in 2019. Cryptopia, a New Zealand-based cryptocurrency exchange was hacked halfway through the very first month of this year. This hack was followed by the data breach suffered by the Israel-based exchange, Coinmama. Although no cryptocurrencies were stolen in the latter, it did result in the leak of close to 450,000 email addresses and passwords. These types of data leaks can have far wider repercussions if not dealt with immediately.

While these hacks and breaches have made global regulators sit up and take notice, they have also clearly resulted in a loss of investor confidence in crypto. However, just because hackers are targeting the crypto world, it doesn’t mean we should steer away from crypto. Instead, investors need to question and understand the reasons behind this sudden rise in crypto cybercrime.

Hackers’ favorite methods to steal crypto

Now, if the past hacking incidents have taught us anything, it is that hackers will continue to follow the growing pool of crypto funds. They will continue to develop newer tools every day to get away with their victims’ money. In short, they will follow the money.

When it comes to targeting crypto, hackers seem to have several go-to methods that they turn to more often than others. These include phishing and using malware droppers to infect a user’s device with a keylogger or buffer manipulator. By injecting scripts such as JavaScript malware into active web sessions, hackers silently execute transfers as soon as a user logs in to their cryptocurrency account. Another relatively new and sophisticated method hackers are increasingly turning to is SIM-swapping, where a victim’s phone number is transferred to a thief’s SIM card, allowing them to change passwords and access the victim’s crypto accounts. What makes it worse is that once cryptocurrencies are stolen, they are gone for good. There’s no way to trace the transactions and no one who can be held accountable.

The reality is that hackers are tracking our habits. Our increased dependence on mobile and desktop devices to surf the web, carry out crypto transactions and store cryptocurrencies signals to hackers that if they can penetrate our devices, accessing our crypto assets is no challenge at all.

Since cryptowallets and crypto exchanges are only as strong as the devices used to host and interact with them, we must be vigilant in finding ways to secure our devices to curb the number of crypto attacks.

The need for a proactive solution

Every four seconds, hackers release a new strain of malware, and by the time there is a solution to wipe away that malware, a new one is injected into the crypto space. Clearly, 30-year-old antivirus solutions will not protect us from the malicious threats prevalent in crypto. So, what can we do to ensure that we don’t lose the fight against hackers?

The only way to deal with crypto attacks is to truly understand the ways in which the devices we use for crypto storage and transactions can be, and have been, compromised. Once we have a good understanding of the various methodologies used against the different devices, we can then begin to implement proactive measures to secure these devices from future hacking.

Crypto investors need only employ simple measures to protect their devices against previously successful hacking techniques and finally have the peace of mind of knowing that their crypto transactions are safe. For example, by installing security features such as keystroke encryption, anti-clickjacking and anti-screen-scraping on their devices, investors can essentially prevent malware from spying on, copying or gathering critical information. Another way to keep hackers at bay is to use stronger password protection with real-time transaction verification.

While there is no sure-fire way to eliminate hackers, acknowledging that hacking attempts are inevitable and taking proactive measures to keep out bad actors can go a long way toward securing one’s crypto assets.


from Help Net Security https://ift.tt/2uyr4fq

CIOs admit certificate-related outages routinely impact critical business applications and services

Certificate-related outages harm the reliability and availability of vital network systems and services while also being extremely difficult to diagnose and remediate. Unfortunately, the vast majority of businesses routinely suffer from these events.


In fact, according to the study released by Venafi, 60 percent of organizations experienced certificate-related outages that impacted critical business applications or services within the last year, and 74 percent faced similar events within the last 24 months.

Certificate-related outages are likely to become more complicated, common and costly in the future. The study also found that:

  • Eighty-five percent believe the increasing complexity and interdependence of IT systems will make outages even more painful in the future.
  • Nearly 80 percent estimate certificate use in their organizations will grow by 25 percent or more in the next five years, with over half anticipating minimum growth rates of more than 50 percent.
  • While 50 percent of CIOs are concerned that certificate outages will have an impact on customer experience, 45 percent are more concerned about the time and resources they consume.

“Recently, a machine identity-related outage impacted 32 million cellular customers in the U.K., and estimates suggest this could have cost the company over $100 million,” said Kevin Bocek, vice president, security strategy and threat intelligence at Venafi.

“Ultimately, companies must get control of all of their certificates; otherwise, it’s simply a matter of time until one expires and causes a debilitating outage. CIOs need greater visibility, intelligence and automation of the entire life cycle of all certificates to do this.”

While humans rely on usernames and passwords to identify themselves and gain authorized access to applications and services, machines rely on digital certificates: these serve as machine identities, allowing machines to communicate securely with one another and gain the same kind of authorized access.

This year, organizations will spend over $10 billion to protect and manage passwords, but they will spend almost nothing to protect and manage machine identities. Most organizations do not have a clear understanding of how many machine identities are in use, which devices are using them, and when they will expire. This lack of comprehensive visibility and intelligence leads to outages.

Bocek added: “Since certificates control authentication and communication between machines, it is important not to let them expire unexpectedly. And because the symptoms of a machine identity-related outage mimic many other hardware and software failures, diagnosing them is notoriously time-consuming and difficult.”
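As a concrete illustration of the visibility Bocek describes, a server certificate’s remaining lifetime can be checked programmatically. This is a minimal sketch using Python’s standard library, not anything from the Venafi study; the hostname argument is a placeholder:

```python
import socket
import ssl
import time

def days_until_expiry(not_after: str) -> float:
    """Days remaining given a certificate's notAfter field, e.g.
    'Jun  1 12:00:00 2025 GMT' (the format getpeercert() returns).
    Negative means the certificate has already expired."""
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400.0

def check_server_cert(hostname: str, port: int = 443) -> float:
    """Fetch a server's certificate over TLS and report days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])
```

A simple inventory script running checks like this across known endpoints is a long way from full machine-identity life-cycle automation, but it catches the most common failure mode: a certificate quietly expiring.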

Over 550 chief information officers (CIOs) from the U.S., U.K., France, Germany and Australia participated in the study.


from Help Net Security https://ift.tt/2HXSi7e

Status of AI implementation at automotive organizations

Just 10 percent of major automotive companies are implementing artificial intelligence (AI) projects at scale, with many falling short of an opportunity that could increase operating profit by up to 16 percent.


Fewer automotive companies are implementing AI than in 2017, despite the cost, quality and productivity advantages.

The “Accelerating Automotive’s AI Transformation: How driving AI enterprise-wide can turbo-charge organizational value” study from the Capgemini Research Institute surveyed 500 executives from large automotive companies in eight countries, building on a comparable study from 2017, to establish recent trends in AI investment and deployment.

The research highlighted the following potential reasons for the modest progress in relation to AI implementation:

  • The roadblocks to technology transformation are still significant, such as legacy IT systems, accuracy and data concerns, and lack of skills.
  • The hype and high expectations that initially came with AI may have turned into a more measured and pragmatic view as companies are confronted with the reality of implementation.

Scaling of AI has seen slow growth

Since 2017, the number of automotive companies that have successfully scaled AI implementation has increased only marginally (from seven percent to 10 percent). However, the increase in companies not using AI at all was more significant (from 26 percent to 39 percent).

According to the report, just 26 percent of companies are now piloting AI projects (down from 41 percent in 2017). This may be due to companies finding it harder than expected to realize a desired return on investment. The results also reveal a significant regional disparity, with 25 percent of U.S. firms delivering AI at scale, compared to nine percent in China.

Automotive organizations can drive significant reward from scaled AI

The modest progress in implementing AI projects at scale represents a major missed opportunity for the industry.

Modelling in the report, based on one typical top 50 original equipment manufacturer (OEM), estimates that delivering AI at scale could achieve increases in operating profit ranging from five percent (or $232m) based on conservative estimates, to 16 percent (or $764m) in an optimistic scenario.

“With AI-empowered visual inspection we have sensibly reduced the ratio of false positives with respect to the previous systems,” said Demetrio Aiello, head of the AI & Robotics Labs at Continental. “I am very confident that if we can deploy AI to its fullest potential it would have an impact on performance equivalent to almost doubling our capacity today.”

AI is seen more as a job-creator than a job-replacer

The report showed that the industry has become more positive about AI’s job-creation potential – 100 percent of executives say that AI is creating new job roles, up from 84 percent in 2017.

Where AI is being deployed, it is achieving results

The survey found a consistent story of AI delivering benefits across every automotive business function. On average, it delivered a 16 percent increase in productivity across Research and Development (R&D), operational efficiency improvements of 15 percent in the supply chain and 16 percent in manufacturing/operations, reduced direct costs of 14 percent in customer experience and 17 percent in IT, and reduced time to market by 15 percent in R&D and 13 percent in marketing/sales.

Additionally, a number of successful AI projects are identified and detailed in the research report. One example is Continental generating 5,000 miles of vehicle test data an hour through an AI-powered simulation, compared to the 6,500 miles a month it was getting through physical test driving.

Others include:

  • Volkswagen accurately modeling vehicle sales across 250 auto models in 120 countries using machine learning.
  • Mercedes-Benz testing an AI-recognition system for parcel delivery that can reduce vehicle loading time by 15 percent.

Markus Winkler, Executive Vice President, Global Head of Automotive at Capgemini concludes, “These findings show that the progress of AI in the automotive industry has hit a speed bump. Some companies are enjoying considerable success, but others have struggled to focus on the most effective use cases. Vehicle manufacturers need to start seeing AI not as a standalone opportunity, but as a strategic capability required to shape the future, around which they must organize investment, talent and governance.”

He continues, “As this research shows, AI can deliver a significant dividend for every automotive business, but only if it is implemented at scale. For AI to succeed, organizations will need to invest in the right skills, achieve the requisite quality of data, and have a management structure that provides both direction and executive support.”


To deliver at scale, companies must invest, upskill and create infrastructure

The report also examined the behaviors of the companies in the survey who have had the most success implementing AI at scale (‘Scale Champions’). It found that they had typically:

  • invested much more in AI (more than $200m a year for 86 percent of Champions);
  • focused hiring and training efforts on AI skills (32 percent said hiring was relevant to their AI strategy, versus 14 percent of others; 25 percent said they proactively upskilled and re-skilled current employees, compared to eight percent of others); and
  • created a clear governance structure to prioritize and promote AI, with measures including a central steering to govern AI investment, and a cross-functional team of tech, business and operations experts.

from Help Net Security https://ift.tt/2Wy8d05

AWS launches Concurrency Scaling, a new Amazon Redshift feature

Amazon Web Services, an Amazon.com company, announced the general availability of Concurrency Scaling, a new Amazon Redshift feature that automatically adds and removes capacity to handle unpredictable demand from thousands of concurrent users.

Concurrency Scaling comes at no cost to almost all customers, and every customer – even those with the spikiest workloads – will immediately see greater processing capacity at lower costs with more predictable spend. AWS also shared that Amazon Redshift has more than 10,000 customers, making it the most popular cloud data warehouse.

The experience gained from serving a large and diverse customer base, along with the lessons learned from processing over two exabytes of customer data every day, has enabled Amazon Redshift to accelerate feature development and continuously optimize performance, delivering 10x faster average query times to customers over the last two years.

Concurrency Scaling is the latest innovation of more than 200 features and enhancements delivered to customers during the past two years, including Elastic Resize, which adds more nodes to a cluster in minutes, and Short Query Acceleration, which uses machine learning algorithms to speed up interactive queries.

“Amazon Redshift provides ten times the performance of traditional on-premises data warehouses at one tenth the cost, and it has the most customers and the largest production deployments of any cloud data warehouse provider,” said Raju Gulabani, Vice President, Database, Analytics, and Machine Learning at AWS.

“Customers choose Amazon Redshift for its performance, scalability, and cost-effectiveness. We’ve had thousands of customers for several years, and as the service has grown, we’ve been lucky enough to get continuous feedback on what we can do to improve the service. This feedback has led to the more than 200 features and capabilities added to the service during the past two years, including Elastic Resize, Short Query Acceleration, and now Concurrency Scaling.”

“Concurrency Scaling adds to Amazon Redshift’s scalability and flexibility by transparently adding and removing capacity to handle unpredictable workloads from thousands of concurrent users. The features and enhancements released over the past two years mean that customers are seeing a 10x improvement in query times on average, and with Concurrency Scaling, the service transparently scales to handle unpredictable demand, making it ideal for all customer data warehousing workloads.”

In addition to the performance gains, customers choose Amazon Redshift because it provides the flexibility to extend their data warehouse to analyze exabytes of data across their Amazon S3 data lake without unnecessary data movement or duplication.

Customers can store their data in their Amazon S3 data lake in popular open formats such as Parquet, ORC, and JSON. They can then use Amazon Redshift Spectrum to break down data silos to answer the most complex analytical questions and uncover new business insights across their data warehouses and data lake.

Pfizer, McDonald’s, Hilton Hotels Worldwide, Yelp, Intuit, Redfin, NTT DOCOMO, Equinox Fitness, and Edmunds are just a few of the more than ten thousand customers and partners that benefit from Amazon Redshift’s new features and enhancements.


from Help Net Security https://ift.tt/2I13Eav

Sphere Identity launches new platform and app to remove friction from KYC process

Sphere Identity, a global, digital identity system, has launched its two-part business platform and mobile application, which will allow businesses to sign up customers in a secure, compliant, and global way, while also giving users simple control of their data.

Sphere Identity is one of the first self-sovereign identity platforms built for digital commerce, backed by distributed storage technologies, providing one-click consumer onboarding that in turn helps businesses navigate GDPR and other regulatory compliance risks.

From a business perspective, companies can easily integrate Sphere Identity into their KYC process, providing a more efficient alternative to online forms and streamlining their customer experience offering in a way that is adaptable to the regulatory and compliance landscape.

For consumers, the mobile application provides individuals with easy access to personal documentation, such as passports and driver’s licenses, saving them the hassle of re-entering the same information numerous times, while also giving them complete control of their personal data.

Katherine Noall, CEO, Sphere Identity, said: “Customer dropoff, revenue loss, data harvesting, and security breaches have become characteristic of this digital age, for both businesses and consumers.

In the wake of hacks, scandals, and regulatory responses, we are seeing an increasing rejection of this reality, with more businesses searching for solutions that uncomplicate KYC processes, make sign-up simple, and keep identity safe.

In a similar vein, customers are realizing that their online identities are multiple, exposed, and easily exploited, and are increasingly frustrated at having to enter the same information over and over again, with little warning as to how that information will be used by third parties.

By adopting digital identity procedures in e-commerce transactions, businesses can introduce trust and automation, which has spurred a distinct market for self-sovereign identity solutions.”

With 22% of customers currently abandoning the sign-up process due to the burdensome process of filling in online forms, Sphere Identity aims to combat the dropoff by providing consumers with a consolidated digital identity that can be easily accessed and applied as required.

The Sphere Identity platform is built on the principles of privacy and security by design. It makes use of distributed storage technology, ensuring that customer data is decentralized and fully encrypted, which reduces the chances of hackers stealing data from a single central source.


from Help Net Security https://ift.tt/2Yzt0Sz

Proxy raises $13.6M in funding and launches a smartphone-powered universal identity signal

Proxy, a startup dedicated to empowering all people with a universal identity signal, announced that it has raised $13.6M in Series A funding in a round led by Kleiner Perkins with participation from WeWork, Y Combinator, Coatue Management and leading industry executives.

This brings Proxy’s total funding to $16.6M to date. With this announcement, Proxy comes out of stealth to launch Proxy Signal, a smartphone-powered universal identity signal that brings frictionless access and personalized experiences into today’s rapidly changing workplace.

Today people must use a multitude of keys, cards, badges, apps, and passwords to gain access to the buildings, devices, and services they frequently use.

Proxy is changing the way people interact with the things they use every day by giving them one secure and privacy-driven universal identity signal, called their Proxy Signal, that is emitted from their smartphone, works everywhere, and goes beyond access to enable responsive environments.

“There are 20 billion connected devices and 7.5 billion humans on earth. We believe Proxy Signals are the future of how everyone will securely authenticate and interact with everything in the physical world,” said Proxy CEO and co-founder, Denis Mars.

“To achieve this, we created an entirely new identity protocol over Bluetooth Low Energy (BLE) that represents humans in the physical world via their smartphones.”

Proxy’s initial product line provides people with mobile access throughout commercial buildings and the workplace — for instance, secured doors, elevators and turnstiles. Proxy is already in use by more than 50 companies, driven by urgent demand to eliminate keycards, badges, visitor passes, ID cards and QR codes.

A recent survey of office workers found that about half must take out their keycard or fob at least five times a day at the office, and about one in five has lost their keycard at least once in the last year – creating security concerns and significant administrative overhead for security teams that Proxy resolves.

“The big idea behind Proxy – that people can use a secure, universally accepted digital identity signal on their smartphone to access important things in the physical world – is incredibly timely and exciting, and it’s what attracted us to invest in the company,” said Wen Hsieh, General Partner at Kleiner Perkins.

“Proxy’s initial physical access market is large and untapped, and the applicability of the technology to other verticals is obvious. There is tremendous potential here, and my partners and I are excited to back this team.”

“We developed this technology to create the kind of frictionless world we ourselves want to live in: one where technology empowers us and then gets out of the way,” said Mars.

“We’ve seen what happens in the online world where technology is designed to track us and own our attention, so we think it’s important to offer a privacy-driven way to own and empower your identity as the digital and physical worlds merge.”

Earlier this year, Proxy partnered with WeWork to implement Proxy Signal at the company’s headquarters in New York and San Francisco, helping provide WeWork employees with seamless access to their workspace.

“At WeWork, we use technology to connect our member communities and our employees around the world,” said Jack Krawczyk, VP of Product Management at WeWork. “Proxy has developed an innovative product that enables us to explore new ways to more seamlessly connect our members with their workspaces, and we look forward to supporting them in their journey.”

Another early Proxy customer is Dropbox, which wanted a better way to enable frictionless access at its San Francisco headquarters.

“With Proxy, we can give our employees, contractors, and visitors a seamless smartphone-enabled access experience they love, while actually bolstering security,” said Christopher Bauer, Physical Security Systems Architect at Dropbox.

How Proxy works

Proxy employs its patented tokenized access engine and universal identity protocol built on top of Bluetooth Low Energy (BLE) to enable any smartphone or wearable to passively and securely emit a signal that represents the user. People create their Proxy identity within the Proxy ID app and then activate their Proxy Signal.
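Proxy’s actual protocol is patented and unpublished, so purely as an illustration of what a tokenized, rotating identity signal can look like, here is a hedged sketch: an HMAC over a short time window, truncated so it fits in a small BLE payload. Every name and parameter here is hypothetical, not Proxy’s design:

```python
import hashlib
import hmac
import struct

def rotating_token(device_key: bytes, timestamp: int, window: int = 30) -> bytes:
    """Hypothetical rotating identity token: HMAC-SHA256 of the current
    time window, truncated to 8 bytes to fit a BLE advertisement."""
    slot = timestamp // window
    mac = hmac.new(device_key, struct.pack(">Q", slot), hashlib.sha256).digest()
    return mac[:8]

def verify_token(device_key: bytes, token: bytes, timestamp: int,
                 window: int = 30) -> bool:
    """A reader accepts the current or immediately previous window
    to tolerate clock skew between phone and reader."""
    for ts in (timestamp, timestamp - window):
        if hmac.compare_digest(rotating_token(device_key, ts, window), token):
            return True
    return False
```

Because the token changes every window and never transmits the key itself, a passively sniffed advertisement cannot be replayed indefinitely — one plausible reason tokenized schemes like this are favored over static badge IDs.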

Users own their identity and personal data and have complete control of the devices that can interact with their Proxy Signal. Users must opt-in and explicitly grant permissions to any device, product or organization they connect with, ensuring the user always has full visibility and control over how their signal is being used in real time.

Proxy’s frictionless workplace solution is available immediately. The Proxy ID app can be downloaded on the iOS App Store and Google Play, and the full line of Proxy mobile readers for buildings and workplaces can be viewed and purchased on proxy.com.

Proxy also publishes a comprehensive set of APIs and SDKs, enabling all products and devices to join the Proxy network to serve people in new and exciting ways.

The company is continuously expanding its building and workplace offerings, enabling responsive environments with context-aware capabilities such as automatic visitor check-ins, smart meeting room booking, responsive desks and workstations, and seamless conference activation, while also developing innovative, identity-empowered solutions for other markets.


from Help Net Security https://ift.tt/2OufFGA

Sysdig unveils new solution to support AWS App Mesh

Sysdig, a cloud-native intelligence company, announced Sysdig support for Amazon Web Services (AWS) App Mesh, a solution that makes it easy to monitor and control microservices running on AWS.

AWS App Mesh standardizes how microservices communicate, giving users end-to-end visibility and helping to ensure high-availability for their applications.

With Sysdig supporting AWS App Mesh natively out-of-the-box, AWS customers will receive additional visibility into how their microservices, running on Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Container Service for Kubernetes (Amazon EKS), are performing, giving them additional insight into the security profile and overall health of their service mesh.

Modern applications are often composed of multiple microservices, each performing a specific function. As the number of microservices grows within an application, it can become difficult to ensure visibility, availability, and scalability inside these environments.

AWS App Mesh uses the open source Envoy proxy, making it compatible with a wide range of AWS Partner Network (APN) and open source tools for monitoring microservices.

“AWS App Mesh allows customers to have a single view and a single point of control for all the communications between microservices in their application without changing their code,” said Deepak Singh, director of compute services at Amazon Web Services, Inc.

“With Sysdig, AWS App Mesh users will be able to monitor the performance of their service mesh as well as view performance and security metrics across their infrastructure, giving our customers added control of their containerized environments.”

Visibility and security of AWS App Mesh and Envoy with Sysdig

With this new support, Sysdig enhances AWS App Mesh monitoring with the ability to automatically scrape metrics from the Envoy proxy’s Prometheus endpoint. ContainerVision, Sysdig’s patented data collection technology, allows enterprises to securely collect, alert on, and visualize the metrics from Envoy.
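Sysdig’s collector itself is proprietary, but scraping a Prometheus endpoint such as Envoy’s needs nothing beyond the standard library: fetch the text exposition format and parse each sample line. The endpoint path shown (Envoy’s admin port commonly serves `/stats/prometheus`) is an assumption for illustration:

```python
import re
import urllib.request

# name, optional {labels}, value — one sample per line in the text format
SAMPLE = re.compile(r"^([a-zA-Z_:][a-zA-Z0-9_:]*)(\{[^}]*\})?\s+(\S+)")

def parse_prometheus_text(text: str) -> dict:
    """Parse Prometheus text exposition format into {metric+labels: float},
    skipping comment lines (# HELP, # TYPE)."""
    samples = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        m = SAMPLE.match(line)
        if m:
            name, labels, value = m.groups()
            samples[name + (labels or "")] = float(value)
    return samples

def scrape(url: str = "http://localhost:9901/stats/prometheus") -> dict:
    """Fetch and parse one scrape of an Envoy-style metrics endpoint."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return parse_prometheus_text(resp.read().decode("utf-8"))
```

A real collector would add label-aware parsing, scrape scheduling, and relabeling, but this is the essence of what “scraping the Envoy proxy’s Prometheus endpoint” means.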

Once collected, Sysdig correlates the data with the vast amount of metrics and events data Sysdig collects and enriches from across the entire container infrastructure, including Kubernetes.

“Service mesh technology is becoming more critical for enterprise customers who are building complex services with containers,” said Loris Degioanni, chief technology officer and founder of Sysdig. “We’re happy to be one of the first to work with AWS to bring the visibility and security capabilities of the Sysdig Cloud-Native Intelligence Platform to AWS App Mesh users.”

Sysdig’s work to support AWS App Mesh shows Sysdig’s commitment to stay in step with the cloud-native market and broaden its support for technologies that are critical for microservices deployments.

This support reaffirms AWS and Sysdig’s rapidly accelerating relationship. In the last six months, Sysdig has announced joining the AWS Marketplace, achieving APN Advanced Technology Partner status, participating in the launch of the AWS Marketplace for Containers, and achieving AWS Container Competency status.


from Help Net Security https://ift.tt/2FxRsLC