Saturday's Best Deals: Ring Video Doorbell, Breda Watches, Dyson V7 Animal, and More

A refurb Ring Video Doorbell 2, discounted Breda watches, and a discounted Dyson V7 Animal lead off Saturday’s best deals from around the web.



from Lifehacker https://ift.tt/30QVwjE

Sophisticated iPhone hacking went unnoticed for over two years


Imagine that an iPhone could be turned into a surveillance tool capable of sending hackers a record of its owner’s entire digital life, including their location in real time, all their emails, chats, contacts, photos and saved passwords.

A showstopper of a compromise, and yet according to Google Project Zero researcher Ian Beer this is exactly what’s been happening to thousands of iPhone users, for more than two years.

It’s a revelation that had some commentators cracking open the hyperbole emergency glass, so let’s cover the important facts of the story before jumping to any alarming conclusions.

The story starts with a discovery by Google’s Threat Analysis Group (TAG):

… [we] discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

The first hint that something was up came on 7 February when Apple released an urgent out-of-band update that took iOS to version 12.1.4.

At the time, the main flaw patched by this appeared to be the FaceTime app call snooping bug (CVE-2019-6223). However, further down the same advisory two other flaws (CVE-2019-7287 and CVE-2019-7286) through which attackers might be able to gain elevated and/or kernel privileges were briefly described.

Kernel panic

These are generic descriptions, but in a blog post this week Beer offered the more alarming backstory to their discovery and its potential threat.

Several months of analysis later, it seems these flaws were part of a haul of fourteen vulnerabilities exploited in various combinations.

Affecting iOS 10.x, 11.x, and 12.x, seven related to the Safari browser, five to the iOS kernel, and two were sandbox escapes. Most of these had been patched over time, but the two reported to Apple above were zero-days, hence the company’s rush to get 12.1.4 out only days after Google told it about the issue.

Google isolated five unique exploit chains – campaigns run over time using different combinations of flaws – one of which dated back to late 2016.

The exploit chains were used against visitors to a small group of websites hacked as part of a ‘watering hole’ campaign (where sites frequented by target individuals are hacked to serve exploits).

Writes Beer:

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

Although this group of campaigns has been disrupted, Beer thinks there are “almost certainly others that are yet to be seen.”

What this means

Victims’ iPhones would have had malware installed in the form of a powerful monitoring implant capable of stealing chat messages (including WhatsApp, Telegram and iMessage), photos, tracking users’ locations in real time, and even accessing the Keychain password store.

If you set out to design a compromise of a mobile device, it’d be hard to imagine a more complete one than this, save for the fact that the campaign was eventually detected.

Two encouraging caveats: first, for attackers to take control of iPhones, they still had to lure victims to specific websites. Second, the malware installed via the exploit chains stopped working when users rebooted their iPhones, forcing the attackers to start the infection over again.

Beer’s write-up hints that the attack may be the work of a nation state group trying to gather intel on specific groups of people for political reasons. We can’t verify if that’s true but if it is, it wouldn’t be the first.

Even if the average iPhone user wasn’t the target of the campaigns described by Google, that’s little comfort. We don’t know what other campaigns the group behind them may have been running or who else knew about these exploits.

However, one major strength of Apple’s platform is that updates should now be offered automatically, a big difference from Android where updates can take months to turn up on some handsets.

iOS has been secure against the exploit chains used in these attacks since version 12.1.4. To check what version you’re using, go to Settings > General > Software Update. This will tell you what version of iOS you’re using and if a newer version is available.


from Naked Security https://ift.tt/2Lery2i

Google discovers websites exploiting iPhones, pushing spying implants en masse

Unidentified attackers have been compromising websites for nearly three years, equipping them with exploits that would hack visiting iPhones without any user interaction and deliver a stealthy implant capable of collecting much of the sensitive information found on users’ iOS-powered devices.


Indiscriminate compromise

“Earlier this year Google’s Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day,” shared Ian Beer, a researcher with Google’s Project Zero.

“There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.”

Subsequent research revealed the attackers’ use of five unique iPhone exploit chains, using 14 vulnerabilities covering almost every version from iOS 10 through to the latest version of iOS 12, meaning that the attackers made “a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.”


Of the 14 vulnerabilities, seven affected Safari (i.e., WebKit, its browser engine), five the kernel and two allowed sandbox escapes.

“Initial analysis indicated that at least one of the privilege escalation chains was still 0-day and unpatched at the time of discovery (CVE-2019-7287 & CVE-2019-7286). We reported these issues to Apple with a 7-day deadline on 1 Feb 2019, which resulted in the out-of-band release of iOS 12.1.4 on 7 Feb 2019,” Beer noted.

“For many of the exploits it is unclear whether they were originally exploited as 0day or as 1day after a fix had already shipped. It is also unknown how the attackers obtained knowledge of the vulnerabilities in the first place,” Google Project Zero researcher Samuel Groß explained.

“Generally they could have discovered the vulnerabilities themselves or used public exploits released after a fix had shipped. Furthermore, at least for WebKit, it is often possible to extract details of a vulnerability from the public source code repository before the fix has been shipped to users.”

More details about the spying implant

According to the researchers, the implant is primarily focused on stealing files and uploading live location data; it beacons to a C&C server every 60 seconds to request commands.

It can access:

  • Users’ photos, contacts, location data (GPS)
  • The device’s Keychain, which contains credentials, certificates and access tokens (e.g., the Google OAuth token)
  • Container directories containing all unencrypted messages sent and received via popular end-to-end encryption apps and mail apps (including Telegram, Gmail, QQMail, WhatsApp, WeChat, and Apple’s own iMessage app).


But, interestingly enough, the implant binary does not persist on the device.

“If the phone is rebooted then the implant will not run until the device is re-exploited when the user visits a compromised site again,” Beer noted.

“Given the breadth of information stolen, the attackers may nevertheless be able to maintain persistent access to various accounts and services by using the stolen authentication tokens from the keychain, even after they lose access to the device.”
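The fixed 60-second beacon interval described above is exactly the kind of regularity network defenders hunt for. Below is a minimal sketch of such a beacon-detection heuristic, assuming request timestamps (in seconds) to a single host have already been extracted from proxy or firewall logs; the jitter tolerance and hit count are illustrative values, not thresholds from the report.

```python
# Heuristic: an implant polling its C&C every ~60 seconds produces
# near-constant inter-request intervals, a classic beaconing signature.
# Thresholds below are illustrative assumptions.

def looks_like_beacon(timestamps, expected=60.0, jitter=5.0, min_hits=5):
    """Return True if consecutive request times to one host cluster
    around a fixed interval (default ~60s)."""
    if len(timestamps) < min_hits + 1:
        return False
    # gaps between consecutive requests
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # count gaps close enough to the expected interval
    regular = sum(1 for g in gaps if abs(g - expected) <= jitter)
    return regular >= min_hits
```

Real detection pipelines add per-destination grouping and statistical jitter analysis, but the core idea is this interval regularity check.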

Who’s behind this?

The researchers didn’t publicly identify the hacked sites (“watering holes”), which is information that could allow us to make an educated guess about the targets and the attackers.

It seems obvious, though, that the attackers have considerable resources at their disposal and, judging by the capabilities of the spying implant, are not financially motivated. In fact, it all points to a long-standing effort supported by a nation-state.

Rendition Infosec founder (and former NSA hacker) Jake Williams told Wired that these campaigns bear many of the hallmarks of a domestic surveillance operation. Still, the fact that the implant uploads the data without HTTPS encryption and to a server whose address is hardcoded in the binary is unusual for such an effort.

“Contrast that with multiple exploit chains and sandbox escapes and it sure sounds like a group with tons of money to buy exploits and little operational experience,” he noted.

The non-disclosure of watering hole sites’ and C&C server’s IP addresses combined with some of the language in Google’s blog posts has spurred online speculation.

All that aside, the most important realization resulting from this discovery is that iPhones are not as secure as widely believed.

“The reality remains that security protections will never eliminate the risk of attack if you’re being targeted,” says Beer, and sometimes this might mean “simply being born in a certain geographic region or being part of a certain ethnic group.”

Users should be conscious of that fact and make risk decisions based on it, he concluded. “Let’s also keep in mind that this was a failure case for the attacker: for this one campaign that we’ve seen, there are almost certainly others that are yet to be seen.”


from Help Net Security https://ift.tt/2HxWObs

Attacking the Intel Secure Enclave

Interesting paper by Michael Schwarz, Samuel Weiser, Daniel Gruss. The upshot is that both Intel and AMD have assumed that trusted enclaves will run only trustworthy code. Of course, that's not true. And there are no security mechanisms that can deal with malicious enclaves, because the designers couldn't imagine that they would be necessary. The results are predictable.

The paper: "Practical Enclave Malware with Intel SGX."

Abstract: Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. For instance, Intel's threat model for SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether this threat model is realistic. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly-deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion, but also act on the user's behalf, e.g., sending phishing emails or mounting denial-of-service attacks. Our SGX-ROP attack uses new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we seek to demystify the enclave malware threat and lay solid ground for future research on and defense against enclave malware.


from Schneier on Security https://ift.tt/2NEP2QV

Google will pay for data abuse reports related to popular Android apps, Chrome extensions

Google is expanding the Google Play Security Reward Program (GPSRP) to include all apps in Google Play with 100 million or more installs, and is launching a new Developer Data Protection Reward Program (DDPRP) and asking for information about data abuse issues in Android apps, OAuth projects, and Chrome extensions.


“The [DDPRP] program aims to reward anyone who can provide verifiably and unambiguous evidence of data abuse, in a similar model as Google’s other vulnerability reward programs. In particular, the program aims to identify situations where user data is being used or sold unexpectedly, or repurposed in an illegitimate way without user consent,” said Google engineers Adam Bacchus, Patrick Mutchler and Sebastian Porst.

“If data abuse is identified related to an app or Chrome extension, that app or extension will accordingly be removed from Google Play or Google Chrome Web Store. In the case of an app developer abusing access to Gmail restricted scopes, their API access will be removed.”

About the Developer Data Protection Reward Program

Reporters can earn as much as $50,000 if the impact of the discovered abuse is substantial.

The types of reported issues that will qualify for a bounty include:

  • Apps falling afoul of Google Play policies (e.g., data collected by an Android app is sold, disclosed or shared by the developer in a manner that violates Google’s and/or the developer’s data handling or privacy policies)
  • Apps violating the Permissions policy (e.g., an app that has SMS permission and shares that data with a third party for advertising purposes)
  • Apps violating the User Data policy (e.g., an app that accesses a user’s inventory of installed apps and doesn’t treat this data as personal or sensitive data subject to the Privacy Policy, Secure Transmission, and Prominent Disclosure requirements)
  • Apps violating the limited use requirements in the API user data policy (e.g., an app providing travel services, using or transferring user data unrelated to travel, or an app transferring user data to affiliates to help develop new products)
  • Chrome extensions violating the Chrome Web Store’s minimum user data privacy requirements
  • Chrome extension developers lacking transparency in their handling of user data, including failing to disclose what is collected and why.

In scope are Android apps with over 100 million installs, Chrome extensions with more than 50,000 users, and apps with more than 50,000 users that use restricted API scopes (allow access to Google User Data).
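The scope rules above can be expressed as a small helper. This is purely illustrative: Google publishes these thresholds as policy text, not as an API, and the function and field names below are hypothetical.

```python
# Encode the three DDPRP scope rules from the article:
#   - Android apps with over 100 million installs
#   - Chrome extensions with more than 50,000 users
#   - apps with more than 50,000 users that use restricted API scopes
# Names and thresholds mirror the article's text; the API shape is invented.

def in_ddprp_scope(kind, installs=0, users=0, uses_restricted_scopes=False):
    """Return True if an app/extension falls within the reported DDPRP scope."""
    if kind == "android_app" and installs > 100_000_000:
        return True
    if kind == "chrome_extension" and users > 50_000:
        return True
    # restricted API scopes allow access to Google user data
    if uses_restricted_scopes and users > 50_000:
        return True
    return False
```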

About the Google Play Security Reward Program

GPSRP is after reports about bugs and vulnerabilities in participating apps on Google Play (developers of Android apps must apply to join the program).

All vulnerabilities must always be reported directly to the app developer first. Once they are fixed, the reporter can request a bonus bounty from Google via this program.

Issues in scope are RCE vulnerabilities, vulnerabilities that lead to theft of private data, and vulnerabilities that allow access to protected app components. The most severe issues (RCE) can be rewarded with as much as $20,000.

The increase in scope of GPSRP means that participants can now report flaws in all apps in Google Play with 100 million or more installs directly to Google, even if the app developers don’t have their own vulnerability disclosure or bug bounty program.

“In these scenarios, Google helps responsibly disclose identified vulnerabilities to the affected app developer. This opens the door for security researchers to help hundreds of organizations identify and fix vulnerabilities in their apps,” the engineers explained.

“Vulnerability data from GPSRP helps Google create automated checks that scan all apps available in Google Play for similar vulnerabilities. Affected app developers are notified through the Play Console as part of the App Security Improvement (ASI) program, which provides information on the vulnerability and how to fix it. Over its lifetime, ASI has helped more than 300,000 developers fix more than 1,000,000 apps on Google Play.”


from Help Net Security https://ift.tt/2ZzjGlk

Waterfall Security and Indegy provide visibility, security and control for ICS environments


Waterfall Security Solutions, the global leader in industrial cybersecurity, and Indegy, a provider of security solutions for industrial control system (ICS) and operational technology (OT) environments, today announced a partnership to safely centralize OT and IT security monitoring.

Waterfall Security Solutions produces a family of Unidirectional Gateway technologies and products, enabling safe IT/OT integration, with physical rather than merely software protections for industrial networks.

Indegy combines hybrid, policy-based monitoring and anomaly detection with unique device integrity checks to enable organizations to proactively identify and respond to ICS security threats and prevent disruptions.

The multi-level integration of the Indegy Industrial Cyber Security Suite with Waterfall Unidirectional Security Gateways enables log, sensor and other security data to pass, in a secure fashion, from OT networks to IT systems for analysis, investigation and reporting.

“Operations networks are among the most important networks in industrial enterprises,” said Lior Frenkel, CEO of Waterfall Security Solutions. “Our partnership with Indegy enables such enterprises to safely extend the scope of enterprise security monitoring programs to these important systems and networks.”

“Protecting OT environments from mounting cyber risks requires deep, end-to-end visibility into activity right down to changes made to individual controllers,” said Barak Perelman, CEO of Indegy. “Our partnership with Waterfall Security enables customers to send ICS security intelligence gathered by Indegy to IT monitoring platforms while preventing the reverse flow of traffic to the OT network.”


from Help Net Security https://ift.tt/2LlkEIH

Thursday, August 29, 2019

CISO priorities: Implementing security from the get-go

Dr. David Brumley, CEO of ForAllSecure, a Carnegie Mellon computer science professor (on leave), and part of the team that won the DARPA Cyber Grand Challenge, was, at one time, a dishwasher and a line chef. That was before going back to get his high school diploma via correspondence courses and attending the University of Northern Colorado (UNCO), where he graduated with a B.A. in Mathematics while also working as a system administrator.


After graduation, he got his first security job: Chief Security Officer at Stanford University. Five years later, he attained a master’s degree in Computer Science from the university. After another five years, he gained a PhD in Computer Science from Carnegie Mellon University (CMU), and began his teaching career there and started a PhD program.

“Working as a CSO gave me thousands of hours of hands-on experience in the field and this shaped my research. In my role as a professor at CMU, I learned a lot about shaping research problems, getting a team of bright minds together to work on them, and keeping the team happy, engaged, and funded,” he told Help Net Security.

Among the problems he wanted to find an answer to was: “How can we automatically check the world’s software for exploitable bugs?”

“I’ve spent 15 years working on technology to help identify vulnerable software. In 2014 at CMU, I was working with two amazing students – Thanassis Avgerinos and Alex Rebert – on this problem, and we had a breakthrough result: we developed a system dubbed Mayhem, which allows users to check off-the-shelf Linux apps for unknown bugs and vulnerabilities,” he shared.

Academically, their work was really well received, but the infosec industry was not yet convinced. So, together the three founded ForAllSecure and entered the DARPA Cyber Grand Challenge (CGC), the first computer security tournament designed to test the “wits” of machines, not human experts.

The objective was to see if the automated identification and repair of security vulnerabilities in software is possible, and Mayhem ended up winning the challenge.

Automation is the only real solution

That was three years ago. Since then, they’ve been working to make the Mayhem DARPA research prototype into a product anyone can use, and have had the opportunity to interact with hundreds of cyber professionals to see how it can help protect the world’s software. They’ve engaged with the Defense Innovation Unit (DIU) – a new unit that brings radically new tech into the DoD to protect its systems.

“We’re learning a lot about customers, products, and how the market takes on new technologies. It’s not easy – Mayhem and similar tools are a new breed. Also, during the Cyber Grand Challenge, we didn’t have to worry about how to get apps inside the system for the check. In real life, we do, and we’re working on making it easy,” he added.

For Dr. Brumley, there’s not a shadow of a doubt that the security industry has to turn to technologies that don’t need humans to find security faults in software.

“Humans cannot react quickly enough to the pace of current threats. Every day attackers probe our networks, find new vulnerabilities, and come up with ingenious ways to circumvent security. We know we can’t out-scale attackers manpower-wise; no organization can hire more security experts than there are potential attackers,” he opined.

“Technology scales and works faster than any human can, but that doesn’t mean that there is no role for humans in this battle. What I’m saying is that we should automate as much as possible, leaving humans for what they do best: creative work, thinking of new problems, finding new solutions. And once they do, we should try to find a way to automate those as well.”

If you’re running it, you’re responsible for its security

Organizations must change the way they implement security and change the way they look at it, he also said.

“When deciding which new tech to deploy on your IT environment, involve security in that decision. When you’re creating new applications, create an application security team who is integrated with your developers,” he advised.

Organizations should also stop asking themselves whether they are secure (there’s no such thing as absolutely secure) and start asking how quickly they can identify and react to a new problem, and whether they can move faster than attackers.

“Forty years of research has shown it’s near impossible to solve the ‘make it secure — period’ problem. I think we can solve the ‘how to move faster’ problem,” Dr. Brumley noted.

Thirdly, organizations need to start considering and thinking about all the risks they inherit.

“When you use open source, you’re inheriting a risk. When you use third-party software you’ve not checked yourself, you’re inheriting a risk,” he explained. “I’ve run into many companies who say to me when I point out a huge gaping hole: ‘well, we didn’t develop or create that.’ That doesn’t matter! If you’re running it, you’re responsible for it.”

And, finally, organizations must invest in their people. Yes, it’s hard and yes, it can be expensive, but people are often thrust into a security role with very little formal training or education, he noted, and they simply have to refine their skills.

“Personally, I’ve found two tricks. First, teach your security people the basics of coding if they don’t already know. The goal isn’t to turn them into developers; it’s to make sure they know how software and computers work deep down,” he advised.

“The second trick I’ve used is encouraging security teams to enter ‘Capture the Flag’ competitions. A hacking CTF is a closed world where security can practice and hone their skills, and ultimately provide a rubric to see how they are doing compared to others. In short, if you play a CTF and get beat, you probably have some skills you can improve on.”

(At CMU, he co-founded and advised a very successful competitive hacking group named the Plaid Parliament of Pwning (PPP). They’ve also created a free online game called PicoCTF to help high school kids – as well as others – to learn how to hack.)

Please, no more FUD

We all know that Fear, Uncertainty, and Doubt (FUD) sells well, but Dr. Brumley would like to see companies start building trust.

“I think smart organizations actually do think in terms of trust. For example, Google provides a free service and incentives to check open source for security flaws with OSS Fuzz. Why do they do this? One answer is that Google wants people to trust their products like Google Chrome. They know if there is a security flaw – even from open source components included in Chrome – people will trust Chrome itself less,” he pointed out.

“When you start thinking of security as a mechanism to build trust, it stops being a cost and becomes added value.”


from Help Net Security https://ift.tt/2MLIAbn

Cybersecurity in the age of the remote workforce


With the advent of cloud services and the proliferation of high-end mobile devices (think iOS devices and Android phones), the workforce is moving inexorably toward a mobile one where managers and employees are no longer tied to the office.

What initially started as a movement to the mobile phone/tablet work style has spilled over into full remote computing solutions. Users want to be able to take any device and work from anywhere with no loss of functionality.

From the business perspective, this can send chills up the backs of CIOs and CISOs as this full freedom work solution creates numerous security and compliance challenges. How does a firm protect its data? How does it minimize data leakage? How do managers ensure a lost device doesn’t constitute a data breach? How do they poke holes in the perimeter so their users can work without reducing the effectiveness of their security solution?

Thankfully there are many options available to the security-minded folks in the room. Users can be set up for a productive experience while maintaining the security integrity for the enterprise.

Here are some of the basics:

Ensure all mobile devices that connect to the corporate network do so utilizing Transport Layer Security (TLS), and that the devices themselves are encrypted, password protected and managed by Corporate IT’s Mobile Device Management (MDM) solution.
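As a minimal sketch of the transport requirement on the client side, Python’s standard ssl module can build a context that refuses anything below TLS 1.2 and always verifies the server certificate. MDM products enforce this via policy on managed devices; this only shows the underlying mechanism.

```python
import ssl

def strict_tls_context():
    """Build an SSL context enforcing certificate verification,
    hostname checking, and a minimum protocol of TLS 1.2."""
    ctx = ssl.create_default_context()            # verification + hostname check enabled
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 / 1.1
    return ctx
```

Any client socket wrapped with this context will refuse downgraded or unverified connections, which is the property the MDM/TLS requirement above is meant to guarantee fleet-wide.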

Make sure that the company’s enrollment requires multi-factor authentication and allows corporate IT to manage corporate applications and data on the devices. Your organization should also look at leveraging MDM features to restrict corporate data from leaving the secure “container” of corporate-managed applications, preventing leakage of corporate data.

Home computers/laptops, cell phones and tablets should be thought of as one category – mobile devices. Companies need to invest in technologies that treat all of those devices from a BYOD perspective; these can manage not just iOS and Android but also Windows personal computers. And when users are outside the office, organizations can combine MDM with capabilities like Conditional Access to govern how and when users can access data.

These systems add a valuable extra variable to the access decision. In the past, the question may have been “do they have access?” Now it can be, “do they have access when they use a mobile device?” Plus, some data needs to be classified differently. Maybe all of your marketing materials should be accessible remotely, but employee benefits info? It’s okay to have different policies and configurations for different classifications of data.

Next, align your remote access methods and policies to meet your business requirements. Don’t allow access to documents and data on devices that don’t meet your minimum security requirements or are not managed at the corporate level. Tools such as MDM providers, Endpoint Analysis Scans and Conditional Access from Microsoft can help firms meet this requirement.
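A toy version of the conditional-access decision described here can make the idea concrete. The classification names, device attributes, and rules below are illustrative assumptions; real products (e.g., Microsoft Conditional Access) evaluate far richer signals.

```python
# Hypothetical per-classification policy: access depends on the data's
# sensitivity AND the requesting device's state, not just on who the user is.

SENSITIVITY_RULES = {
    "public":     {"require_managed": False, "require_encrypted": False},
    "internal":   {"require_managed": True,  "require_encrypted": True},
    "restricted": {"require_managed": True,  "require_encrypted": True},
}

def allow_access(classification, device_managed, device_encrypted, mfa_passed):
    """Return True only if the device satisfies the policy for this
    data classification."""
    rule = SENSITIVITY_RULES[classification]
    if rule["require_managed"] and not device_managed:
        return False
    if rule["require_encrypted"] and not device_encrypted:
        return False
    # restricted data additionally requires a passed MFA challenge
    if classification == "restricted" and not mfa_passed:
        return False
    return True
```

Note how marketing-style “public” data stays reachable from any device, while benefits-style “restricted” data needs a managed, encrypted device plus MFA: exactly the per-classification policy split argued for above.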

Look to embrace cloud-native services for a better user experience. Frequently, firms try to restrict end users, forcing them into the poor experience of accessing legacy systems through a cumbersome VPN. Not only does this drive support costs up and productivity down, it eventually stops being effective.

VPN solutions create security holes in and of themselves. Embracing cloud native services that allow someone to work seamlessly from any device anywhere will provide an improved user experience and drive adoption up. Also, these solutions will give the enterprise the ability to configure corporate policies to govern how the data can and will be used. A win for both sides.

The remote workforce trend is only going to become more prevalent and companies need to ensure that they have best practices in place to address security concerns while providing users with a positive remote experience.


from Help Net Security https://ift.tt/2PoQedU

Fileless attacks designed to disguise malicious activity up 265%

Trend Micro published its roundup report for the first half of 2019, revealing a surge in fileless attacks designed to disguise malicious activity. Detections of this threat alone were up 265% compared to the first half of 2018.


Detections of fileless events in the first half of 2019 were 18% higher than in the whole of 2018.

“Sophistication and stealth are the name of the cybersecurity game today as corporate technology and criminal attacks become more connected and smarter,” said Jon Clay, director of global threat communications for Trend Micro. “From attackers, we saw intentional, targeted, and crafty attacks that stealthily take advantage of people, processes and technology. However, on the business side, digital transformation and cloud migrations are expanding and evolving the corporate attack surface. To navigate this evolution, businesses need a technology partner that can combine human expertise with advanced security technologies to better detect, correlate, respond to, and remediate threats.”

Along with the growth in fileless threats in the first half of the year, attackers are increasingly deploying threats that aren’t visible to traditional security filters, as they can be executed in a system’s memory, reside in the registry, or abuse legitimate tools. Exploit kits have also made a comeback, with a 136% increase compared to the same time in 2018.
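One common way defenders approximate detection of the in-memory and tool-abuse behaviors described above is to flag suspicious command lines of otherwise legitimate binaries. The indicator list below is a small illustrative sample, not a complete or authoritative ruleset.

```python
# Illustrative "living off the land" indicators: trusted Windows tools
# invoked in ways typical of fileless attacks (encoded payloads,
# remote scriptlets, HTA payloads). Patterns are examples only.

SUSPICIOUS_PATTERNS = [
    "powershell -enc",            # encoded PowerShell payloads
    "powershell -nop -w hidden",  # no-profile, hidden-window launches
    "regsvr32 /i:http",           # remote scriptlet execution
    "mshta http",                 # HTA payload fetched from the web
    "rundll32 javascript:",       # script execution via rundll32
]

def flag_command_line(cmdline):
    """Return the first matched indicator, or None if the command
    line looks benign under this (deliberately small) ruleset."""
    # lowercase and collapse repeated whitespace before matching
    lowered = " ".join(cmdline.lower().split())
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern in lowered:
            return pattern
    return None
```

Production EDR tools combine signals like these with in-memory scanning and behavioral correlation, since string matching alone is easy to evade.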

Cryptomining malware remained the most detected threat in the first half of 2019, with attackers increasingly deploying these threats on servers and in cloud environments.

The number of routers involved in possible inbound attacks jumped 64% compared to the first half of 2018, with more Mirai variants searching for exposed devices.

Additionally, digital extortion schemes soared by 319% from the second half of 2018, which aligns with previous projections. Business email compromise (BEC) remains a major threat, with detections jumping 52% compared to the past six months. Ransomware-related files, emails and URLs also grew 77% over the same period.


Detections of ransomware-related threats increased significantly.

In total, Trend Micro blocked more than 26.8 billion threats in the first half of 2019, over 6 billion more than the same period last year. Of note, 91% of these threats entered the corporate network via email. Mitigating these advanced threats requires smart defense-in-depth that can correlate data from across gateways, networks, servers and endpoints to best identify and stop attacks.


from Help Net Security https://ift.tt/2NEGw4t

S2 Ep6: Instagram phishing, jailbreaking iPhones and social media hoaxes – Naked Security Podcast


Episode 6 of the Naked Security Podcast is now live!

This week, host Anna Brading is joined by Mark Stockley and Paul Ducklin to discuss jailbreaking iPhones [2’50”], sophisticated Instagram phishing [14’02”] and the latest social media hoax [28’23”].

As always, we love answering your cybersecurity questions on the show – simply comment below or ask us on social media.

Listen now and tell us what you think!

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast.

Audio player above not working? Download MP3, listen on Soundcloud or on Apple Podcasts, or access via Spotify.


from Naked Security https://ift.tt/2ZvqYWZ

AI Emotion-Detection Arms Race

Voice systems are increasingly using AI techniques to determine emotion. A new paper describes an AI-based countermeasure to mask emotion in spoken words.

Their method for masking emotion involves collecting speech, analyzing it, and extracting emotional features from the raw signal. Next, an AI program trains on this signal and replaces the emotional indicators in speech, flattening them. Finally, a voice synthesizer re-generates the normalized speech from the AI’s outputs, which is then sent to the cloud. The researchers say that this method reduced emotional identification by 96 percent in an experiment, although speech recognition accuracy decreased, with a word error rate of 35 percent.
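The flattening step can be sketched in miniature. The example below is a purely illustrative, pure-Python stand-in for the paper's learned normalizer (the function name `flatten_emotion` and the choice of pitch as the sole emotional feature are my own simplifications): emotional speech shows large variation in features like pitch, so pulling every frame toward the utterance mean removes most of the emotional signal.

```python
def flatten_emotion(pitch_frames, strength=1.0):
    """Flatten per-frame pitch toward the utterance mean.

    A toy stand-in for the paper's learned normalizer: emotional
    speech shows large pitch variation, so pulling every frame
    toward the mean removes most of that variation.
    strength=1.0 removes all variation; 0.0 leaves speech untouched.
    """
    mean_pitch = sum(pitch_frames) / len(pitch_frames)
    return [p + (mean_pitch - p) * strength for p in pitch_frames]

# Excited speech: pitch swings widely around 160 Hz.
excited = [120.0, 200.0, 140.0, 220.0, 120.0]
flattened = flatten_emotion(excited, strength=1.0)
print(flattened)  # every frame collapses to the mean, 160.0
```

The trade-off the researchers report (a 35 percent word error rate) reflects the fact that a real system must flatten emotion without also destroying the features speech recognition depends on; this sketch ignores that entirely.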

Academic paper.


from Schneier on Security https://ift.tt/2LlMCEi

Microsoft may still be violating privacy rules, says Dutch regulator


After the privacy hell-hole that was Windows 10 circa 2017-ish, you’re doing better, the Dutch Data Protection Authority (DPA) told Microsoft on Tuesday, but you still aren’t legally kosher, privacy-wise.

A very quick recap: Users howled. Regulators scowled. Microsoft tweaked in 2017. The DPA investigated those tweaks. The upshot of its investigation: the DPA has asked the Irish privacy regulator – the Irish Data Protection Commission, DPC – to re-investigate the privacy of Windows users.

What a long, strange privacy trip it’s been

A recap with more flesh on its bones: in 2015, Microsoft released Windows 10. From the get-go, France’s privacy watchdog – the National Data Protection Commission (CNIL) – had concerns about the operating system’s processing of personal data through telemetry.

Windows 10’s release had sparked a storm of controversy over privacy: Concerns rose over the Wi-Fi password sharing feature, Microsoft’s plans to keep people from running counterfeit software, the inability to opt out of security updates, weekly dossiers sent to parents on their kids’ online activity, and the fact that Windows 10 by default was sharing a lot of users’ personal information – contacts, calendar details, text and touch input, location data, and more – with Microsoft’s servers.

After conducting tests, CNIL determined that there were plenty of reasons to think that Microsoft wasn’t compliant with the French Data Protection Act. In July 2016, it gave Microsoft three months to fix Windows 10 security and privacy.

After the CNIL’s warning and a slap from the Electronic Frontier Foundation (EFF), Microsoft made a series of changes to tackle the privacy concerns around Windows 10.

In January 2017, Microsoft launched a web-based privacy dashboard that let users pick and choose what information gets sent to the company – be it tracking data, speech recognition, diagnostics or advertising IDs that apps glue on to your system for targeted marketing.

OK, so Microsoft made some changes. Was it enough? No.

In October 2017, the DPA said that after looking into the privacy of users of Windows Home and Pro, it had concluded that Microsoft was still illegally processing personal data through telemetry. Specifically, it found that

Microsoft continuously collects technical performance and user data. This includes which apps are installed and, if the user has not changed the default settings, how often apps are used, as well as data on web surfing behaviour. These data are called ‘telemetry data’. Microsoft takes continuous pictures – as it were – of the behaviour of Windows users and sends them to itself.

The Dutch privacy watchdog ordered Microsoft to make more changes to Windows, which the company did in the April 2018 update. The DPA outlined what was expected from that update, to pull everything up to speed with the impending General Data Protection Regulation (GDPR):

Microsoft will ensure that users are better informed about the data it collects and what this data is used for. In addition, users can take active, straightforward steps to control their own privacy settings. In light of the new EU privacy law (the General Data Protection Regulation), which comes into force on 25 May 2018, the Dutch DPA has insisted that the update be implemented across the entire EU. Microsoft has agreed to do this, and the Dutch DPA will monitor implementation.

…all of which leads us up to now. So, how did that April 2018 update do?

Better, but maybe still in violation

The DPA said on Tuesday that the changes made in the April 2018 Windows update have led to “an actual improvement” in data privacy. But at the same time, it appears that… “Microsoft also collects other data from remote users.” Upshot:

As a result, Microsoft may still violate the privacy rules.

Therefore, the DPA says, it’s time for the lead privacy regulator in Europe – that would be the Irish DPC – to investigate further concerns about how Windows collects user data.

User beware

The Dutch data privacy regulator is also advising Windows users to “pay close attention to privacy settings when installing and using this software.”

Microsoft is permitted to process personal data if consent has been given in the correct way. We’ve found that Microsoft collect diagnostic and non-diagnostic data. We’d like to know if it is necessary to collect the non-diagnostic data and if users are well informed about this.

Does Microsoft collect more data than they need to (think about data minimalization as a base principle of the GDPR)? Those questions can only be answered after further examination.

The Irish DPC confirmed to TechCrunch that it received the Dutch regulator’s concerns last month. The publication quoted a DPC spokeswoman:

Since then the DPC has been liaising with the Dutch DPA to further this matter. The DPC has had preliminary engagement with Microsoft and, with the assistance of the Dutch authority, we will shortly be engaging further with Microsoft to seek substantive responses on the concerns raised.

And this is what Microsoft had to say on the matter:

The Dutch data protection authority has in the past brought data protection concerns to our attention, which related to the consumer versions of Windows 10, Windows 10 Home and Pro. We will work with the Irish Data Protection Commission to learn about any further questions or concerns it may have, and to address any further questions and concerns as quickly as possible.

Microsoft is committed to protecting our customers’ privacy and putting them in control of their information. Over recent years, in close coordination with the Dutch data protection authority, we have introduced a number of new privacy features to provide clear privacy choices and easy-to-use tools for our individual and small business users of Windows 10. We welcome the opportunity to improve even more the tools and choices we offer to these end users.

Are you so over ads while onboarding?

As one reader noted when we wrote up the 2017 privacy dashboard introduction, they were seeing ads every time they logged on to Windows 10. TechCrunch notes that during the onboarding process for Windows 10, Microsoft makes multiple requests to process user data for various reasons, including to serve ads to users.

As Naked Security’s Paul Ducklin responded at the time, he never saw ads on Windows 10, including at login, in spite of installing and reinstalling the operating system “any number of times” in the test rig he was using to get malware screenshots to use in his articles. But then, he knows where to look for the right options, he said:

When I do my installs I pick ‘custom’ and not ‘express settings’ at the relevant setup configuration prompt, and then turn all the options off using the toggles. I assume this helps reduce the tat that I see compared to what some other people are seeing.

TechCrunch also noted that Windows 10 uses its digital assistant, Cortana, to provide a running commentary on settings screens, including nudges to agree to the company’s T&Cs… If you want to run Windows, that is. From TechCrunch:

‘If you don’t agree, y’know, no Windows!’ the human-sounding robot says at one point.

Is that nudging one of the DPA’s concerns? It’s not clear yet. Time will tell, so tune in to next month’s/year’s episode, as this long-running privacy-regulator wrestling match continues. We’ll let you know when we do!


from Naked Security https://ift.tt/2LdpJCR

Wednesday, August 28, 2019

Knowing what’s on your hybrid-IT environment is fundamental to security

In this Help Net Security podcast recorded at Black Hat USA 2019, Shiva Mandalam, VP of Products, Visibility and Control at Qualys, talks about the importance of visibility.

Whether on-prem (devices and applications), mobile, endpoints, clouds, containers, OT and IoT – Qualys sensors continuously discover your IT assets, providing 100% real-time visibility of your global hybrid-IT environment.

Here’s a transcript of the podcast for your convenience.

Good afternoon everybody. My name is Shiva Mandalam and I’m a VP of Products for Visibility and Control solutions here at Qualys. I wanted to talk to you about what has changed in the visibility and control of the need for controlled solutions — how they need to really change — and talk to you about some of the trends which are happening, and also what Qualys is actually doing in this area.

With the fragmentation of IT, and the fact that an average enterprise uses at least five security tools (in some cases it’s actually 10, in some cases 20), we see that this has led to poor visibility: the visibility that once existed in the enterprise is no longer there.

We are also seeing the rise of IoT devices, and the number of devices coming into the enterprise is accelerating. I’ve talked to multiple CISOs across enterprises and virtually everybody’s asking the same question: “How do I get to 100% real-time visibility of my environment across all of this different, heterogeneous infrastructure and architecture?”

And there is another problem with regard to visibility: an average enterprise uses 5, 6, 7, 8 security tools, and all of these are collecting information about the assets. Unfortunately, none of this is standardized. So, when a threat detector or somebody on the security team looks at all this data and tries to stitch it all together and correlate it, that is very manual, laborious and time consuming today. Something has to be done about visibility, and that is the problem we are trying to tackle from a Qualys perspective.

The IT Asset Inventory solution, which we have introduced recently at Qualys, really addresses a lot of these challenges from a customer perspective. It can work with both managed and unmanaged devices, and it looks at the devices as well as the software and the applications, etc. Everything in one place, across your on-premises, endpoint, mobile, cloud, container, IoT and OT environments as well.

So, it is kind of a comprehensive visibility solution. We feel like the rest of the industry is not really going to be able to deliver on the solution like we did, because you need a different and unique architecture. From a Qualys perspective, to deliver on the solution we built it on the cloud-based architecture which is our platform, and we also support different models through which we can consume data.

We offer a collection of agentless and agent-based, as well as API-based, solutions to collect the data from wherever it lives – your managed devices or unmanaged devices – and then bring all this telemetry into the Qualys cloud platform.

The cloud platform is where things are normalized and categorized. This is another big differentiation in terms of how we think about Qualys’ solution. The rest of the industry is thinking about an asset as a unidimensional view of “here’s the asset, here’s my hostname, here’s my MAC address, here’s my IP address.” The challenge is that, if you think about an organization and its assets, you not only have those raw data attributes, you also have a manufacturer bringing in, for example, a Windows operating system in different versions. A particular server product may come in Windows 2008, Windows 2012, Windows 2016, for example; each of these has different releases like an R1, R2, R3, and each one of these has different patches as well.

The aspect of really thinking about asset as in “here’s my IP address or a MAC address” is no longer valid. You really need to be able to understand the asset from a multi-dimensional view and then normalize and organize and categorize all this information, depending on where you belong.
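The normalization idea the speaker describes can be sketched as parsing a raw OS string into structured fields. This is an illustrative stand-in, not Qualys’ actual catalog or schema (the function `normalize_asset` and its output fields are assumptions for the example):

```python
import re

def normalize_asset(raw_os: str) -> dict:
    """Turn a raw OS string into a multi-dimensional record.

    Illustrative only -- a real asset catalog is far richer. The
    point is that "an asset" is vendor + product + version +
    release + patch level, not just a hostname or IP address.
    """
    m = re.match(
        r"(?P<vendor>Microsoft) (?P<product>Windows Server) "
        r"(?P<version>\d{4})(?: (?P<release>R\d))?",
        raw_os,
    )
    return m.groupdict() if m else {"raw": raw_os}

print(normalize_asset("Microsoft Windows Server 2012 R2"))
# {'vendor': 'Microsoft', 'product': 'Windows Server',
#  'version': '2012', 'release': 'R2'}
```

Once every sensor’s raw strings are normalized into records like this, two tools reporting “Win2012R2” and “Microsoft Windows Server 2012 R2” describe the same asset, which is what makes fast, catalog-style search possible.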

Why is it important? Think about what Amazon did to shopping: they created a store and a catalog which made it very, very easy for people to go and search for products. The other players at the time could not do this. So, with the Qualys IT Asset Inventory solution, because everything is normalized and categorized, you can search for anything you want in a matter of seconds.

The other aspect of the platform which supports this capability is the ability to take all the data from the sensors we talked about, bring it all in, and index it through our Elasticsearch clusters so that the information is available at your fingertips. Beyond being able to recognize the different types of devices, the other thing you need to solve from a security perspective is to provide remediation capabilities.

Once you understand an asset, its vulnerabilities, and its compliance or non-compliance, you want to be able to take action: segment the asset or quarantine it. Some examples of why you would want to do this: the device has exhibited malicious behavior, it has downloaded a piece of software it should not have, or it shows some indication of threat or compromise that can put the organization at risk.

How do you bring in this context and start to drive towards remediation? Our visibility and control solutions allow you to do both: not only understand the assets, but also remediate the challenges.

We recently announced this Global IT Asset Inventory solution, which is the cornerstone for doing proper security. From an offer perspective, we’ve announced a free IT Asset Inventory solution which can work in any environment and is available at no cost for an unlimited number of devices.

There’s nothing in the marketplace which allows you to address the problem head-on like we have done. To take advantage of that, we definitely invite you to go to qualys.com/inventory to get yourselves familiar with the Asset Inventory solution, start a trial and get the free app, because it costs you nothing. All the data is indexed as well as categorized, and everything’s available for you at your fingertips.


from Help Net Security https://ift.tt/2LsnFaj

What can be done about the rising click interception threat?

Ad networks’ increasingly successful efforts to detect bot-based ad click fraud has forced attackers to focus more on intercepting and redirecting legitimate users’ clicks.

How widespread is the practice?

A group of researchers from Microsoft Research and several Chinese, Korean and U.S. universities has created a browser-based analysis framework called Observer and analyzed click related behaviors on the Alexa top 250K websites.

They discovered 437 third-party scripts intercepting user clicks on 613 websites, which receive around 43 million visits every day, and found that attackers are using three different techniques to intercept user clicks:

  • Interception by hyperlinks (script creates new or modifies existing hyperlinks)
  • Interception by event handlers (script adds navigation event handlers to different elements of the web page)
  • Interception by visual deception (script creates elements that mimic those already present on the site or inserts visible or invisible overlays).
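The first technique can be detected by snapshotting every link's target before third-party scripts run and diffing afterwards. The sketch below is a hedged, pure-Python illustration of that idea, not the researchers' actual Observer framework (the function name `find_intercepted_links` and the dict-based "DOM snapshot" are assumptions for the example):

```python
def find_intercepted_links(before: dict, after: dict) -> list:
    """Flag hyperlinks whose targets changed after third-party
    scripts ran -- the first interception technique above.

    `before`/`after` map a link's id to its href, as a browser
    instrumentation layer might snapshot them before and after
    third-party script execution. Purely illustrative.
    """
    suspicious = []
    for link_id, original_href in before.items():
        current = after.get(link_id)
        if current != original_href:
            suspicious.append((link_id, original_href, current))
    # Links that did not exist before the scripts ran are also suspect.
    suspicious += [(i, h) for i, h in after.items() if i not in before]
    return suspicious

before = {"nav-home": "https://example.com/"}
after = {"nav-home": "https://evil.example/ad", "injected": "https://evil.example/x"}
print(find_intercepted_links(before, after))
```

A real implementation would hook the browser's DOM APIs rather than compare snapshots, and would also need to track event handlers and overlays to catch the other two techniques.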

“We revealed that some websites collude with third-party scripts to hijack user clicks for monetization. In particular, our analysis demonstrated that more than 36% of the 3,251 unique click interception URLs were related to online advertising, which is the primary monetization approach on the Web,” the researchers shared.

“Besides monetization, we find that click interception can lead a user to visit malicious contents. In particular, we were directed to some fake anti-virus (AV) software and drive-by download pages when we manually examined some of the click interception URLs.”

Also, the attackers are occasionally trying to make their click interception efforts less noticeable, by limiting the rate at which they intercept the clicks (e.g., the interception happens only the first time users visit a page).

Possible threat mitigations

Click interception has become an emerging threat to web users, the researchers noted, and they offered several possible mitigations.

For example: sites could show the provenance information for each hyperlink and click. These messages should be unforgeable and tamper-proof, and would be displayed when the user hovers the mouse over a link, over an element or when the user performs a click.

But this mitigation will require users to make security decisions, and we all know that’s not the best option: security fatigue is real.

“Alternatively, we can let the browser automatically enforce integrity policies for hyperlinks and click event handlers,” they said.

“For example, an integrity policy can specify that all first-party hyperlinks shall not be modifiable by third-party JavaScript code. One may further specify that third-party scripts are not allowed to control frame navigations, although listening for user click is still permitted. Enforcing all such policies would effectively prevent click-interception by hyperlinks and event handlers. However, it might also break the functionalities of some third-party components.”
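The example policy in the quote, that first-party hyperlinks shall not be modifiable by third-party code, reduces to a simple origin check. The sketch below is a hedged illustration of the rule itself (real enforcement would live inside the browser engine, and `may_modify_link` is a hypothetical name):

```python
from urllib.parse import urlparse

def may_modify_link(page_origin: str, link_href: str, script_origin: str) -> bool:
    """Enforce the example policy from the text: first-party
    hyperlinks shall not be modifiable by third-party scripts.

    Illustrative only -- actual enforcement would happen inside
    the browser, not in page-level code.
    """
    link_is_first_party = urlparse(link_href).netloc == urlparse(page_origin).netloc
    script_is_first_party = urlparse(script_origin).netloc == urlparse(page_origin).netloc
    if link_is_first_party and not script_is_first_party:
        return False  # blocked: third-party script touching a first-party link
    return True

print(may_modify_link("https://news.example", "https://news.example/story",
                      "https://ads.example/t.js"))   # False
print(may_modify_link("https://news.example", "https://news.example/story",
                      "https://news.example/app.js"))  # True
```

The breakage the researchers warn about is visible even here: legitimate third-party widgets that rewrite first-party links (URL shorteners, analytics decorators) would be blocked by the same check.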


from Help Net Security https://ift.tt/324Xm0i

New ransomware grows 118% as cybercriminals adopt fresh tactics and code innovations

McAfee Labs saw an average of 504 new threats per minute in Q1 2019, and a resurgence of ransomware along with changes in campaign execution and code. More than 2.2 billion stolen account credentials were made available on the cybercriminal underground over the course of the quarter. Sixty-eight percent of targeted attacks utilized spear-phishing for initial access, while 77% relied upon user actions for campaign execution.

“The impact of these threats is very real,” said Raj Samani, McAfee fellow and chief scientist. “It’s important to recognize that the numbers, highlighting increases or decreases of certain types of attacks, only tell a fraction of the story. Every infection is another business dealing with outages, or a consumer facing major fraud. We must not forget for every cyberattack, there is a human cost.”

Ransomware resurgence features new campaign tactics

McAfee Advanced Threat Research (ATR) observed innovations in ransomware campaigns, with shifts in initial access vectors, campaign management and technical innovations in the code.

While spear phishing remained popular, ransomware attacks increasingly targeted exposed remote access points, such as Remote Desktop Protocol (RDP); these credentials can be cracked through a brute-force attack or bought on the cybercriminal underground. RDP credentials can be used to gain admin privileges, granting full rights to distribute and execute malware on corporate networks.

McAfee researchers also observed actors behind ransomware attacks using anonymous email services to manage their campaigns versus the traditional approach of setting up command-and-control (C2) servers. Authorities and private partners often hunt for C2 servers to obtain decryption keys and create evasion tools. Thus, the use of email services is perceived by threat actors to be a more anonymous method of conducting criminal business.

The most active ransomware families of the quarter appeared to be Dharma (also known as Crysis), GandCrab and Ryuk. Other notable ransomware families of the quarter include Anatova, which was exposed by McAfee Advanced Threat Research before it had the opportunity to spread broadly, and Scarab, a persistent and prevalent ransomware family with regularly discovered new variants. Overall, new ransomware samples increased 118%.

“After a periodic decrease in new families and developments at the end of 2018, the first quarter of 2019 was game on again for ransomware, with code innovations and a new, much more targeted approach,” said Christiaan Beek, McAfee lead scientist and senior principal engineer. “Paying ransoms supports cybercriminal businesses and perpetuates attacks. There are other options available to victims of ransomware. Decryption tools and campaign information are available through tools such as the No More Ransom project.”

Q1 2019 threats activity

Attack vectors. Malware led disclosed attack vectors, followed by account hijacking and targeted attacks.

Cryptomining. New coin mining malware increased 29%. McAfee ATR observed CookieMiner malware targeting Apple users, attempting to obtain bitcoin wallet credentials. As a byproduct, the malware also gained access to passwords and browsing data. Total coin mining malware samples grew 414% over the past four quarters.

Fileless malware. New JavaScript malware declined 13%, while total JavaScript malware grew 62% over the past four quarters. New PowerShell malware increased 460% due to the use of downloader scripts; total PowerShell malware grew 76% over the past four quarters.

IoT. Cybercriminals continued to leverage lax security in IoT devices. New malware samples increased 10%; total IoT malware grew 154% over the past four quarters.

Malware overall. New malware samples increased by 35%. New Mac OS malware samples declined by 33%.

Mobile malware. New mobile malware samples decreased 15%, while total mobile malware grew 29% over the past four quarters.

Security incidents. McAfee Labs counted 412 publicly disclosed security incidents, an increase of 20% from Q4. Thirty-two percent of all publicly disclosed security incidents took place in the Americas, followed by 13% in Europe and 13% in Asia-Pacific.

Regional targets. Disclosed incidents targeting the Asia-Pacific region increased 126%, Americas declined nearly 3% and Europe decreased nearly 2%.

Vertical industry activity. Disclosed incidents impacting individuals spiked 78%, education sector increased 50%, healthcare increased 18%, public sector decreased 10%, and financial sector increased 89%.

Targeted attacks. McAfee identified a high number of campaigns that effectively minimized the data reconnaissance required to successfully execute attacks. Actors primarily focused on large organizations in the Government/Administration sector, followed by Finance, Chemical, Defense, and Education sectors. Initial access was gained by spearphishing in 68% of attacks and 77% relied upon specific user actions for attack execution.

Underground. More than 2.2 billion stolen account credentials were made available on the cybercriminal underground over the course of the quarter. The largest dark market, Dream Market, announced its plan to close, citing a large number of DDoS attacks. Law enforcement successfully seized and closed operations of xDedic, one of the largest RDP shops reportedly selling access to approximately 70,000 hacked machines.


from Help Net Security https://ift.tt/2Zuv2CL

SOCs still overwhelmed by alert overload, struggle with false-positives

Security Operations Center (SOC) analysts continue to face an overwhelming number of alerts each day that are taking longer to investigate, leading five times as many SOC analysts this year to believe their primary job responsibility is simply to “reduce the time it takes to investigate alerts”.

The most striking finding is the direct toll the alert overload problem is taking on SOC analysts, with more than 8 out of 10 respondents reporting that their SOC had experienced analyst churn of anywhere from 10% to more than 50% in the past year.

CRITICALSTART surveyed SOC professionals across enterprises, Managed Security Services Providers (MSSP) and Managed Detection & Response (MDR) providers to evaluate the state of incident response within SOCs from a variety of perspectives, including alert volume and management, business models, customer communications as well as SOC analyst training and turnover.

Alert overload

70% of respondents investigate 10+ alerts each day (up from 45% last year) while 78% state that it takes 10+ minutes to investigate each alert (up from 64% last year). In addition, false-positives remain a struggle, with nearly half of respondents reporting a false-positive rate of 50% or higher, almost identical to last year.
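Those two figures compound. A back-of-the-envelope calculation using only the survey's lower bounds (so these numbers are conservative, and the helper name is mine, not the report's):

```python
def daily_triage_hours(alerts_per_day: int, minutes_per_alert: int) -> float:
    """Minimum analyst-hours spent triaging alerts per day."""
    return alerts_per_day * minutes_per_alert / 60

# Survey lower bounds: 10+ alerts/day at 10+ minutes each.
hours = daily_triage_hours(10, 10)
print(f"{hours:.1f} hours/day")  # 1.7 hours/day -- and with a ~50%
# false-positive rate, roughly half of that time is spent on noise.
```

At the higher volumes many respondents report, triage alone can consume an entire shift, which is consistent with the churn figures above.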

Response to alert overload & main job responsibility

With the onslaught of alerts, 38% of respondents say their SOC either tries to hire more analysts or turn off high-volume alerting features deemed too noisy, both up significantly from last year. The number of respondents that feel their main job responsibility is to analyse and remediate security threats has dropped dramatically from 70% down to 41% as analysts increasingly believe their role is to reduce alert investigation time or the volume of alerts.

Customer transparency & communications

A clear majority of respondents (57%) report that MSSPs and MDRs offer limited to no transparency for customers into investigations or underlying data. And in the age of the mobile enterprise, email is still king for customer communications – 73% of respondents report interacting with customers via email, followed by 47% via a desktop portal.

Annual training

Nearly half of respondents say they get 20 or fewer hours of training per year, a surprise given today’s dynamic threat environment.

SOC analyst turnover

In the past year, 80% of respondents report SOC turnover of more than 10% of analysts, with nearly half reporting 10-25% turnover.

“The research reflects what we are seeing in the industry – as SOCs get overwhelmed with alerts, they begin to ignore low to medium priority alerts, turn off or tune out noisy security applications, and try to hire more bodies in a futile attempt to keep up,” said Rob Davis, CEO at CRITICALSTART. “Combine that stressful work environment with no training and it becomes clear why SOC analyst churn rates are so high, which only results in enterprises being more exposed to risk and security threats.”


from Help Net Security https://ift.tt/327VJz5

Product showcase: Stellar Repair for Exchange

Recovering from a corrupt Microsoft Exchange Server database or restoring a mailbox from an old Exchange database can be very tricky and, depending on the damage, an impossible task. The technical aspect of recovery requires a lot of effort, not to mention the fact that Exchange admins need to deal with frustrated users at the same time.

When an Exchange Server is down you can’t receive or send emails, and your business may be stuck. Repairing an Exchange Server using native applications, tools or PowerShell commands can be a lengthy and, depending on the case, unsuccessful process.

Stellar Repair for Exchange addresses this disaster by providing the ability to attach one or more Exchange EDB files to the application and allowing you to browse them. The software will let you export from a corrupted Exchange EDB file, or an old Exchange EDB file of an Exchange Server which doesn’t exist anymore. The application supports Exchange from version 5.5 to the latest version – Exchange 2019.

Key features

  • Access multiple and large EDB files
  • Export data to PST or other formats
  • Granular search and recovery
  • Advanced Filter Option
  • Export from EDB directly to a live Exchange Server and Office 365
  • Mailbox Creation on live Exchange Server
  • Reduces administration effort by 80%

Access multiple and large EDB files

Stellar Repair for Exchange lets you attach and scan multiple corrupt EDB files, as well as mount databases. If you have two or more EDB files to recover from, you can attach them all at once and export accordingly. Although it’s a small application, it can handle a number of EDB files at the same time.

Attach several EDB files

The application will scan the EDB file where Quick Scan is selected by default. Depending on the damage, you can run an Extensive Scan. From a test, a Quick Scan on a corrupt EDB file of around 11GB took less than a minute to complete.

Export data to PST or other formats

You can export mailboxes/folders to PST, MSG, EML, HTML, RTF and PDF file formats.

Granular search and recovery

You can make granular restores by clicking on the Home tab and Find message. This will open a number of criteria to search with: body, attachment, to/from/cc, subject, date range, importance and read/unread.

Granular search with a good search filter

Advanced filter option

With Stellar Repair for Exchange, you can apply an additional filter by clicking the Apply Filter button. Here you can exclude Junk and Deleted emails, specify a date range, and also exclude any emails coming from one or multiple addresses.
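The combined effect of those filters can be illustrated with a plain-Python sketch; this is not Stellar's actual interface, just the filtering logic described above applied to hypothetical message records:

```python
from datetime import date

def apply_filters(messages, start, end, excluded_senders,
                  exclude_folders=("Junk", "Deleted Items")):
    """Combine the filters described above: drop Junk/Deleted
    items, keep only a date range, and exclude specific senders.

    A plain-Python illustration of the filtering logic -- not
    the product's actual API.
    """
    return [
        m for m in messages
        if m["folder"] not in exclude_folders
        and start <= m["date"] <= end
        and m["from"] not in excluded_senders
    ]

inbox = [
    {"from": "boss@corp.example", "date": date(2019, 5, 2), "folder": "Inbox"},
    {"from": "spam@junk.example", "date": date(2019, 5, 3), "folder": "Junk"},
    {"from": "noisy@corp.example", "date": date(2019, 5, 4), "folder": "Inbox"},
]
kept = apply_filters(inbox, date(2019, 5, 1), date(2019, 5, 31),
                     {"noisy@corp.example"})
print([m["from"] for m in kept])  # ['boss@corp.example']
```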

Once the operation is completed, you’ll see emails populating the mailbox.

Export from EDB directly to a live Exchange Server and Office 365

Stellar Repair for Exchange has a neat feature when exporting multiple mailboxes to Exchange Server. The process is simple. Select the mailboxes you need to export including the folders or any data you wish to move, click the Save button, tick Live Exchange Server and click Next.

Enter the server name of the destination, the administrator email, the respective account and select the Exchange Server version. As you can see below, there is an Auto Map tick box. This feature comes in handy if you already have users and the application will automatically match the mailboxes you are exporting with the mailboxes you have on your Exchange Server.

After this is done, you will be presented with the screen that includes matching results. If there is no match, the mailboxes will be marked in red. From the application you can either match it with another mailbox or you can set it up to automatically create the mailboxes for you. So, if you had a disaster where you only had one Mailbox Store, the application will export from the EDB, import into Exchange Server and create the mailboxes for you.

Once you are happy with the selection simply click the Export button and let the application import your data with no downtime and your mailboxes mounted. For Office 365, you must enter the Global Admin credentials. The rest of the options are all the same as in Exchange Server.

Mailbox creation on live Exchange Server

With Create Mailbox you can create a new mailbox, or search for and enable existing mailboxes. To create a mailbox, click Create New and enter a username. If you want to change the Mailbox Store, select a different one from the drop-down.


If you have users in Active Directory that are not mailbox-enabled, for example after a bulk import, select Enable Existing and the application will automatically enable the mailbox for you.

Reduce administration efforts by 80%

When you have a crisis on your hands, with an Exchange mailbox database that won't mount and a corrupted EDB file, fixing the issue with native tools is time-intensive.

Even when performing a dial tone recovery in Exchange, you'll still experience downtime while restoring from a previous backup. Apart from taking plenty of time, this process might leave you with missing emails, since you are restoring from an older backup. During this time you'll be dealing with frustrated users and angry bosses, while the business may be unable to send or receive email.

Consider also situations where you're restoring an old EDB file, an EDB file from an older version of Exchange, or just a single mailbox or email. All of these require considerable administration effort.

You can deal with all of the above by downloading a free trial of Stellar Repair for Exchange.


from Help Net Security https://ift.tt/2NCmCqJ

The Myth of Consumer-Grade Security

The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that's not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.

In his keynote address at the International Conference on Cybersecurity, Attorney General William Barr argued that companies should weaken encryption systems to gain access to consumer devices for criminal investigations. Barr repeated a common fallacy about a difference between military-grade encryption and consumer encryption: "After all, we are not talking about protecting the nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications."

The thing is, that distinction between military and consumer products largely doesn't exist. All of those "consumer products" Barr wants access to are used by government officials -- heads of state, legislators, judges, military commanders and everyone else -- worldwide. They're used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They're critical to national security as well as personal security.

This wasn't true during much of the Cold War. Before the Internet revolution, military-grade electronics were different from consumer-grade. Military contracts drove innovation in many areas, and those sectors got the cool new stuff first. That started to change in the 1980s, when consumer electronics started to become the place where innovation happened. The military responded by creating a category of military hardware called COTS: commercial off-the-shelf technology. More consumer products became approved for military applications. Today, pretty much everything that doesn't have to be hardened for battle is COTS and is the exact same product purchased by consumers. And a lot of battle-hardened technologies are the same computer hardware and software products as the commercial items, but in sturdier packaging.

Through the mid-1990s, there was a difference between military-grade encryption and consumer-grade encryption. Laws regulated encryption as a munition and limited what could legally be exported only to key lengths that were easily breakable. That changed with the rise of Internet commerce, because the needs of commercial applications more closely mirrored the needs of the military. Today, the predominant encryption algorithm for commercial applications -- Advanced Encryption Standard (AES) -- is approved by the National Security Agency (NSA) to secure information up to the level of Top Secret. The Department of Defense's classified analogs of the Internet­ -- Secret Internet Protocol Router Network (SIPRNet), Joint Worldwide Intelligence Communications System (JWICS) and probably others whose names aren't yet public -- use the same Internet protocols, software, and hardware that the rest of the world does, albeit with additional physical controls. And the NSA routinely assists in securing business and consumer systems, including helping Google defend itself from Chinese hackers in 2010.

Yes, there are some military applications that are different. The US nuclear system Barr mentions is one such example -- and it uses ancient computers and 8-inch floppy drives. But for pretty much everything that doesn't see active combat, it's modern laptops, iPhones, the same Internet everyone else uses, and the same cloud services.

This is also true for corporate applications. Corporations rarely use customized encryption to protect their operations. They also use the same types of computers, networks, and cloud services that the government and consumers use. Customized security is both more expensive because it is unique, and less secure because it's nonstandard and untested.

During the Cold War, the NSA had the dual mission of attacking Soviet computers and communications systems and defending domestic counterparts. It was possible to do both simultaneously only because the two systems were different at every level. Today, the entire world uses Internet protocols; iPhones and Android phones; and iMessage, WhatsApp and Signal to secure their chats. Consumer-grade encryption is the same as military-grade encryption, and consumer security is the same as national security.

Barr can't weaken consumer systems without also weakening commercial, government, and military systems. There's one world, one network, and one answer. As a matter of policy, the nation has to decide which takes precedence: offense or defense. If security is deliberately weakened, it will be weakened for everybody. And if security is strengthened, it is strengthened for everybody. It's time to accept the fact that these systems are too critical to society to weaken. Everyone will be more secure with stronger encryption, even if it means the bad guys get to use that encryption as well.

This essay previously appeared on Lawfare.com.


from Schneier on Security https://ift.tt/32eksBT

Imperva discloses security incident affecting Cloud WAF customers

Imperva, the well-known California-based web application security company, has announced that it has suffered a “security incident” involving its Cloud Web Application Firewall (WAF) product, formerly known as Incapsula.


What happened?

The announcement is very light on details and (perhaps intentionally) vaguely worded, but these are the currently known facts:

  • On August 20, 2019, a third party notified Imperva of data exposure impacting some of their customers
  • Imperva’s initial investigation discovered that parts of its Incapsula customer database were exposed, including email addresses, hashed and salted passwords, API keys and customer-provided SSL certificates of a “subset” of Incapsula customers (up until September 15, 2017)
  • The investigation is ongoing, they’ve called in outside forensic experts, notified the appropriate global regulatory agencies, and have begun informing impacted customers and advising them on what to do.
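The exposed passwords were hashed and salted, which slows attackers down but does not make the data useless, hence the advice to reset credentials anyway. As a rough illustration of what "hashed and salted" means (a generic sketch using Python's standard library, not Imperva's actual scheme):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash with PBKDF2-HMAC-SHA256 (100,000 iterations)."""
    if salt is None:
        salt = os.urandom(16)  # a unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, expected)
```

Because every password gets its own random salt, identical passwords produce different stored hashes, so an attacker who obtains the database must attack each entry individually rather than using a single precomputed table.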

The company chose not to share for now:

  • Who was the reporting third party
  • Whether this was a data leak (e.g., a misconfigured cloud backup of the database) or whether their own networks and systems have been breached
  • Why they didn’t spot the leak/breach themselves
  • Whether the compromised data was discovered being sold or actively misused
  • The approximate number of affected customers
  • When the breach actually happened.

It’s, of course, possible that they don’t know the answers to some of these questions yet.

The company has advised all Cloud WAF customers to change their account passwords, implement Single Sign-On (SSO), enable two-factor authentication, generate and upload a new SSL certificate, and reset their API keys.
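Resetting API keys and passwords means replacing them with fresh, unpredictable values. A minimal sketch of generating replacement secrets with Python's standard CSPRNG (the function names are illustrative, not part of Imperva's tooling):

```python
import secrets
import string

def new_api_key(nbytes=32):
    """Generate a URL-safe random API key from the OS CSPRNG."""
    return secrets.token_urlsafe(nbytes)

def new_temp_password(length=20):
    """Generate a random temporary password from a mixed alphabet."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The `secrets` module draws from the operating system's cryptographic random source, unlike the `random` module, which is predictable and unsuitable for credentials.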

Imperva has also decided to implement forced password rotations and 90-day expirations in their Cloud WAF product.

The company is owned by private equity firm Thoma Bravo, which acquired it in 2019.


from Help Net Security https://ift.tt/2HvTKMY

Tuesday, August 27, 2019

What the education industry must do to protect itself from cyber attacks


Data breaches show no signs of slowing down and companies across many industry verticals fall victim to what now seems to be a regular occurrence.

Most attention around data breaches is on the commercial side, with Capital One being the recent high-profile breach, compromising the personal information of more than 100 million people. However, the education sector is proving to also be an attractive target.

This summer made it evident that K-12 school districts, higher education, and even commercial companies working with educational institutions are at risk. Notably, the state of Louisiana declared a state of emergency following an attack that disabled computers at three school districts. And it’s not just a problem in Louisiana — schools nationwide are being targeted by hackers.

On August 2, the K-12 Cybersecurity Resource Center’s K-12 Cyber Incident Map reported its 533rd publicly disclosed cyber incident, which means the number of data breaches against K-12 school districts in 2019 has already surpassed 2018’s total. With four months still to go until the end of the year and the 2019-2020 school year beginning, school districts must take appropriate measures to protect themselves from the next attack.

Each year, more schools make the transition to the cloud and security falls further behind. The adoption of cloud technology in schools means that not only must security teams have the resources to monitor for suspicious and malicious activity from external threats, they must also simultaneously be well-equipped to monitor for potential threats from within.

The start of the school year means millions of students and staff members will return to a school’s cloud environment. It also means massive amounts of data will flow into, within and out of that environment. Computers, laptops, and cloud applications like Google G Suite and Microsoft 365 are now as essential to a school supply list as notebooks, binders and pencils. Teachers and staff members use these cloud-based productivity applications as much as they do email, spreadsheets and word processing.

The fact is, schools today cannot function without these education-oriented cloud technologies and applications. At the same time, funding shortages mean that securing them is often not prioritized. But hackers are aware of this and schools should protect themselves moving forward.

Here are three ways to get the ball rolling:

1. Shift the focus to prevention, not mitigation

Most school districts have fewer than 2,500 students and don't have a staff member dedicated to handling cyber security incidents. Because of this, schools have become a target.

But their mindset should shift from “if an attack happens” to “when an attack happens.”

Many schools across the U.S. have made the transition — or eventually will — to running classroom and administrative operations in the cloud. The problem, however, is that securing the applications in that new cloud environment has been an afterthought. This means schools are leaving student data vulnerable to identity theft, fraud, and other emerging threats.

By shifting the focus to secure applications and data before an attack happens, rather than after, schools and other organizations in the education market will be better prepared to protect students, staff, and operations against an external attack or internal incident.

2. Minimize internal threats

The increase in adoption of cloud applications means schools must also improve their security posture to prevent an internal incident. K-12 schools that have recently transitioned to the cloud, or are still making the transition, may not realize cyber security means more than securing a network with firewalls and gateways. It also means securing the data within the cloud environment — even when an individual and device physically leaves the premises.

Verizon’s 2019 Data Breach Investigations Report found that nearly 32 percent of breaches involved phishing, 34 percent involved internal actors and that errors were causal events in 21 percent of breaches. Focusing on cloud application security as much as network or endpoint security will help minimize the internal threats that could occur throughout the school year and will help prevent sensitive data from leaving a school’s environment.

For example, a member of a school’s faculty could be at home and click on a phishing link in an email. That phishing link has now granted hackers access to the school’s cloud environment. Attackers are then able to pass through any firewall and gateway schools have in place and can download and share any files they want. Most worrying of all, schools may never know the breach took place unless the hacker discloses it (as typically seen in a ransomware attack).

3. Make data loss prevention a priority this year

Educational institutions must fulfill data security and privacy requirements mandated by specialized laws and regulations such as the Family Educational Rights and Privacy Act (FERPA), the Children’s Internet Protection Act (CIPA), the Children’s Online Privacy Protection Act (COPPA), and the Health Insurance Portability and Accountability Act (HIPAA).

They must also protect their own organizational data, including the personal and financial data of their employees, and usually do it all without having huge security budgets.

When thinking about data loss prevention, most think of tools and solutions. But while data loss prevention tools can monitor user activity to detect improper or unusual behavior, preventing data loss goes much deeper. Institutions must educate staff and students on the most common types of human error and the various threats they may come across. They must also plan and document processes to be better prepared and protected.
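One building block behind "detecting unusual behavior" is baselining each user's normal activity and flagging large deviations. A toy sketch of the idea, assuming a per-user history of daily file-download counts (the function and threshold are illustrative, not from any specific DLP product):

```python
from statistics import mean, stdev

def flag_unusual_downloads(history, today, threshold=3.0):
    """Flag a user whose download count today exceeds their historical
    mean by more than `threshold` standard deviations (a simple z-score)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold
```

Real DLP systems combine many such signals (time of day, file sensitivity, destination), but the principle is the same: define normal, then alert on the outliers.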

Attackers are becoming more sophisticated in their attacks and it’s high time for schools to become more sophisticated in their defenses. Remember, security doesn’t have to be expensive or complicated, but configuring protections correctly and monitoring for vulnerabilities and potential breaches is essential.


from Help Net Security https://ift.tt/32a3jJA

How passwords paved the way for new technology


On July 15 we lost a major contributor to modern-day IT security: Dr. Fernando Corbato, the inventor of the computer password. In the early days of computing, machines could only run a single job at a time. Multiple users might take turns on the same system, but in essence they shared all of its content; there was no way to keep one user's files private from another.

Dr. Corbato recognized this limitation and led the development of an operating system called CTSS (the Compatible Time-Sharing System). CTSS allowed users to share a machine and save their work without impacting other users, and users couldn't see each other's files. This is where the password came to be: when users logged in with their password, they had access to their work and only their work. It was a brilliant invention and shaped the world as we know it. Passwords are everywhere.
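The core idea is small enough to sketch in a few lines: a password gates each user's account, and a successful login reveals only that user's files. A toy illustration of the concept (usernames, passwords, and files here are invented; modern systems store password hashes, not plaintext):

```python
# A minimal per-user store: each password gates only that user's files.
users = {
    "alice": {"password": "rosebud", "files": ["thesis.txt"]},
    "bob":   {"password": "sunrise", "files": ["payroll.dat"]},
}

def login(username, password):
    """Return the user's files on a correct password, else None."""
    account = users.get(username)
    if account is None or account["password"] != password:
        return None
    return account["files"]
```

Even in this stripped-down form, the essential property of CTSS holds: knowing Alice's password gives you Alice's work and nothing else.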

A solid foundation for security

As we all know, the concept of the password has remained largely unchanged since that time. We still require a username and password to access our "stuff," whether at work, at home, or on the road. We have talked at length about the death of the password, yet it remains as prevalent as ever. You may ask yourself, "Well, why?" Because the password is ubiquitous, it works, and everyone "gets" it.

With a password you don't need any other device or apparatus to get access; it's simple. But in that simplicity lies significant risk, due in large part to management issues on both the back end (IT) and the end-user side. For organizations, that risk comes in the form of a breach.

With today’s focus on privacy and the requirements to manage data correctly (GDPR, CCPA, etc.) this means that if a breach occurs, it more than likely will impact the organization in more ways than one, such as fines, loss of customers, damage to brand equity, etc. But I’m not telling you anything you don’t likely already know.

Is it enough?

As organizations transform into a digital environment, they face pressure on multiple fronts, not only from a governance and compliance standpoint, but also from elements such as IoT and customer experience. And with that, we need to change.

We need to change because there is so much more to protect, and frankly, much more at stake. We need to do a better job at securing everything (user, device, thing, service, etc.). And so, just as Dr. Corbato enabled us to “share” computers, we need to move on from his invention while remembering where we came from. The username combined with the password allowed users to share core processes and data, but as organizations continue with their digital transformations, they need to consider how to build secure environments from the onset.

The risk to the modern organization continues to drastically increase. Security starts with authentication and authorization. That needs to be done right, or it’s a non-starter from a security and privacy perspective. It’s no secret we are quickly outgrowing the simple username and password. We are approaching a zero-trust authentication model where we need to employ new methods to grant the right access, to the right things at the right time, with the right experience, to reduce organizational exposure and risk.

We need to start now.

Looking to the future

How do we celebrate what was and look to what's next? Single sign-on (SSO) and federation were a big step forward in user authentication. But how do we bring the divergent, siloed infrastructures most organizations are built on into a manageable, secure whole, while ensuring a tailored user experience and device-to-device engagement with the right balance of security? That's the magic question.

We have many options available, but adoption of the technology that supports these different authentication methods isn't widespread. Every organization is different and, as a result, needs different authentication methods. Therefore, there isn't a single option as pervasive as the password is and was.

As for me, I believe the smart device is the answer. As these devices evolve, they will drive widespread adoption of new access and authentication methods. Most people have at least one device they never leave home without, and manufacturers continue to pack technology and privacy controls into them. I believe they hold the key to frictionless access and enhanced security. They are becoming universal and, in my opinion, are the next step in the security evolution.

Thank you, Dr. Fernando Corbato, for an invention that changed the face of security and for building the foundation from which we can take the next step!


from Help Net Security https://ift.tt/2ZFlN7f