Friday Squid Blogging: Squid Empire Is a New Book

Regularly I receive mail from people wanting to advertise on, write for, or sponsor posts on my blog. My rule is that I say no to everyone. There is no amount of money or free stuff that will get me to write about your security product or service.

With regard to squid, however, I have no such compunctions. Send me any sort of squid anything, and I am happy to write about it. Earlier this week, for example, I received two -- not one -- copies of the new book Squid Empire: The Rise and Fall of Cephalopods. I haven't read it yet, but it looks good. It's the story of prehistoric squid.

Here's a review by someone who has read it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.


from Schneier on Security http://ift.tt/2fy0fR4

Equifax mea-culpas with free credit “locks” forever


Equifax is mea-culpa-ing by offering free credit locks for life, starting on 31 January.

These are not credit freezes, mind you. No, Equifax is giving away credit padlocks that it says are a new service.

We don’t know much about the credit locks outside of what Equifax’s new interim CEO, Paulino do Rego Barros Jr., said in an editorial published by the Wall Street Journal on Wednesday, the day after he was appointed.

Barros got his new gig the same day that Equifax’s previous CEO, Richard Smith, washed his hands and walked away from the embarrassing mess. That is, Smith washed his hands, but he didn’t wash off the $18 million pension he took with him after his 12-year tenure.

Barros said the credit locks will be easy for consumers to lock and unlock, unlike credit freezes, which require PINs (yes, those PINs) to unlock … and which stop thieves dead in their tracks … and which cost the credit bureaus money they’d otherwise make when banks, credit card companies, cell phone companies and the like pull customers’ credit reports, as the New York Times explains.

The data monger has a lot to mea culpa about. The credit lock freebie-4-ever comes three weeks after Equifax’s breach affected about half of everybody in the US, 400,000 in the UK and 100,000 in Canada.

…mind you, it was a breach that was enabled by a critical RCE (Remote Code Execution) flaw for which patches had been available for two months before the mid-May attack.

Equifax has been pratfalling ever since, as Barros is well aware.

As ZDNet’s Zack Whittaker reported, an XSS (cross-site scripting) vulnerability was found in Equifax’s fraud alerts website – a flaw that could be used in phishing emails to trick consumers into turning over personal data.

And there was that leaky customer portal in Argentina – username ‘admin’, password ‘admin’.

It just kept getting more and more pratfally: there were the woeful PINs that put frozen credit files at risk, and then there was Equifax’s not-so-neat party trick of ditching its tried and trusted equifax.com domain and instead putting its breach info site on the easy-to-typosquat, bafflingly convoluted domain equifaxsecurity2017.com … a domain name that it proceeded to scramble at least three times, sending customers to a fake phishing site for weeks.

Beyond the pile of cyber D’oh!, there were insufficient, underprepared operators at the call centres, leaving alarmed customers facing delays and agents who couldn’t answer questions.

There’s no excuse for any of it, Barros said in his editorial. The company is adding agents and getting them trained, and he’s getting a daily update on the situation.

As well, Equifax is going to fix that problematic site of theirs. If it can’t fix it, it’s going to build a new one from scratch, Barros said. It’s also extending the window to sign up for free credit freezes and its TrustedID Premier credit monitoring service, both of which you can sign up for through the end of January.

I’m sure Equifax is sincerely sorry about this mess. But here’s the thing: given its track record, would you trust the company’s new credit lock service? From the NYT’s Ron Lieber:

This is the same company … that could not create a functioning website for people worried about whether thieves had stolen their Social Security numbers. People who have been trying to freeze their files have run into too many problems to name, and many of them do not yet have PINs. I’ve received hundreds of emails complaining about Equifax’s basic dysfunction.

Why does Equifax even need a new service? Why can’t it just give free credit freezes for life?

Lieber sent Equifax 18 questions that we still need answered, including:

Whether Equifax will force people to submit to mandatory arbitration or some other loss of privileges or rights in exchange for free locks for life. Or whether your name will end up on lists for various offers of credit. This is how TransUnion’s similar free service works, one that it’s been pushing hard at people who have come to its website looking for a credit freeze in the wake of the Equifax hack.

Good questions. As Mother Jones has noted, credit freezes and credit locks come with strings. TransUnion’s Disclaimers and Warranties suggest that in order to interact with the company at all, you have to absolve it of liability for anything that might happen to your data on its watch.

TransUnion, by the way, also has credit locks, and they’re definitely not free. When I tried to set one up, it looked like I was heading toward a $19.99/month credit-monitoring bleed.

Will the free credit locks cause the other credit bureaus to follow suit? I’m not holding my breath. At any rate, I want my $5 back. I want all my $5 payments back: as a resident of Massachusetts, that’s how much I had to fork over to each of TransUnion and Experian to freeze my credit at those bureaus, all on account of Equifax’s pratfall. People in other states have had to shell out even more.

I called Equifax’s “We’re sorry, we’re sorry, we’ve got enough phone operators on hand now, we swear!” number to ask if Equifax had any intention of refunding customers the money we’ve had to fork over because of its breach.

Its trained operators might not have been trained to handle that one yet: the answer was a stammered “I haven’t heard of anything like that…”

No, I’m not surprised. Again, I’m not holding my breath on that one, either.



from Naked Security http://ift.tt/2xGNqfr

Signal app’s address book security could upset governments


Signal, arguably the world’s most respected secure messaging app, plans to use the DRM (Digital Rights Management) secure enclave built into Intel’s Skylake chips as a way of hiding away how people are connected.

It sounds esoteric, but it fixes an important privacy weakness that has dogged end-to-end encrypted messaging: users want to know who else they know that uses the same service. This requires that apps check who else among a person’s contacts uses it by consulting a central “social graph” of how people are connected.

This is a privacy compromise because it means that while the service’s own encryption stops it from reading your messages (or letting intelligence agencies that later ask for access to this data read them either) it can end up knowing a lot about who you know.

Signal tries to counteract this by not maintaining its own centralised social graph but instead using yours: your address book.

To find out if someone you know uses Signal, the app first turns their number into a truncated SHA256 hash, then matches it against a central directory of hashes (this is similar to the way that password authentication works). Anyone intercepting the traffic or hacking the directory will see hashes rather than telephone numbers.

The only way for a hacker with stolen hashes to figure out what telephone numbers they’ve got is to guess. Guess a number, run it through the hashing algorithm and see if it matches one that you’ve stolen. If it doesn’t match anything, guess another number, and another, and another… and so on until you find a match.

There is a problem with this scheme, though: because the “pre-image space” for 10-digit numbers is small, “inverting these hashes is basically a straightforward dictionary attack”, to quote Signal’s developers Open Whisper Systems. That’s another way of saying it’s feasible for a computer to make guesses quickly and cheaply enough to compromise the security of the hashes.
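To see why the small pre-image space matters, here’s a toy sketch of the attack. This is an illustration, not Signal’s actual scheme: the truncation length, number format and tiny search range are all assumptions made to keep the example fast.

```python
import hashlib

def contact_hash(number, digits=10):
    # Truncated SHA-256 of a phone number, standing in for
    # contact-discovery hashing (illustrative only).
    return hashlib.sha256(number.encode()).hexdigest()[:digits]

# Pretend this hash was stolen from a contact-discovery directory.
stolen = contact_hash("5550001234")

def invert(target):
    # Dictionary attack: enumerate the whole pre-image space and
    # compare. Real 10-digit numbers give only 10^10 candidates,
    # which is cheap to exhaust; here we search a toy 10^4 range.
    for i in range(10_000):
        candidate = "555000" + f"{i:04d}"
        if contact_hash(candidate) == target:
            return candidate
    return None

print(invert(stolen))
```

Because every candidate can be hashed and checked independently, the attack also parallelizes trivially, which is exactly why hashing alone isn’t considered sufficient protection here.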

Signal doesn’t keep any record of the lookups it’s performed and allows you to satisfy yourself that it doesn’t by giving you access to its source code:

…if you trust the Signal service to be running the published server source code, then the Signal service has no durable knowledge of a user’s social graph if it is hacked or subpoenaed.

But who’s to say that it’s the published server source code that’s actually running on Signal’s server rather than some version of it that’s been modified by a hacker or the demands of an intelligence agency?

…someone who hacks the Signal service could potentially modify the code so that it logs user contact discovery requests, or (although unlikely given present law) some government agency could show up and require us to change the service so that it logs contact discovery requests.

Open Whisper Systems’ founder Moxie Marlinspike thinks the Software Guard Extension (SGX) instruction built into Intel chips as a secure enclave for Digital Rights Management (DRM) offers a way out of the problem, and has integrated it into a new Signal open source Beta.

This is similar to ARM’s TrustZone technology that forms the basis of Samsung’s Knox security system, but was designed with DRM-oriented features such as “remote attestation”.

Remote attestation is normally used by content providers to verify that you and I are running the software we are permitted to, software that will respect DRM restrictions, rather than something that can pirate the content it’s playing.

In Signal’s case this arrangement is inverted. The enclave is on its server rather than on your device and remote attestation allows you, the client, to attest that the server is running a squeaky clean copy of Signal’s software.

Furthermore, because the verified copy of Signal’s software is running in an enclave, neither it nor the messages that pass between you and the enclave can be interfered with by other software on the server.

A practical hurdle to this is SGX’s 128MB RAM limit, which sounds like a lot of protected memory for a microprocessor but is nowhere near enough to hold a database that might contain billions of hashes.

Not to mention:

Even with encrypted RAM, the server OS can learn enough from observing memory access patterns … to determine the plaintext values of the contacts the client transmitted!

Open Whisper Systems’ solution is to perform “a full linear scan across the entire data set of all registered users for every client contact submitted,” which is to say access lots of hashes in the database so anyone with control of the OS can’t detect a pattern.
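The idea behind the linear scan can be sketched in a few lines: a membership check that touches every entry on every lookup, so an observer watching memory accesses learns nothing about which entry (if any) matched. This is a conceptual toy, not Open Whisper Systems’ SGX code.

```python
def oblivious_contains(directory, query):
    # Full linear scan with no early exit: every entry is read on
    # every lookup, so the access pattern is identical whether or
    # not -- and wherever -- the query matches.
    found = False
    for entry in directory:
        found |= (entry == query)
    return found
```

Note the deliberate absence of a `break`: an early exit would leak the matching position through timing and memory-access patterns, which is the leak the scan is designed to prevent.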

For any sizable user base, this would be incredibly slow if it had to be done for every user, almost every time they connect to the service (messaging apps perform regular checks in case new users appear).

To avoid this turning into a computer science lecture, we’ll sum up Marlinspike’s proposed solution by saying that it is based around disordering the way hashes are stored within the hash table to make it harder to carry out surveillance on them.

Does any of this matter beyond this one app?

Undoubtedly. Signal’s user base is small, but where Signal goes, other secure messaging apps have a habit of following, including WhatsApp and Facebook Messenger with their billion or more users. Since adopting Signal’s underlying platform in 2016, both appear to be implementing its innovations over time.

We don’t know whether this will include using server-side SGX enclaves, but if it does, it could provoke a response from governments already questioning the use of encrypted messaging.

App companies want to preserve user privacy for complex reasons we’ve written about before, including a desire not to turn into large-scale surveillance platforms for global governments in ways that might hurt their popularity.

But the bottom line is clear: losing access to address book metadata will not go down well with the powers that be.



from Naked Security http://ift.tt/2ydKmdN

Android malware ZNIU exploits DirtyCOW vulnerability

Thanks to Jagadeesh Chandraiah of SophosLabs for his behind-the-scenes work on this article.

Last year, we told you about DirtyCOW, a privilege escalation bug in the Linux kernel that allows ordinary users to turn themselves into all-powerful root users. It soon became clear that DirtyCOW didn’t just affect Linux running on Intel processors: it was also exploitable on Android (a modified version of Linux) running on ARM chips.

This raised the possibility of DirtyCOW being used to compromise phones and tablets.

SophosLabs has now found malware, dubbed ZNIU, that does exactly that.

Enter ZNIU

Victims have to stray beyond the safety of the Google Play walled garden to get ZNIU, so attackers trick them into downloading infected apps from untrusted sources with old-fashioned social engineering. This example of ZNIU comes packaged as a porn app.

After being installed, the app exploits DirtyCOW to elevate its privileges, bypassing the restrictions that would normally stand in its way. Once it infects a device, ZNIU contacts its command-and-control servers to receive updates and orders. The servers are consulted whenever the infected device is connected to a power outlet or there’s a change in connectivity.

Malicious code contained in APKs (Android Application Packages) is downloaded from remote servers and executed at runtime in the hope of avoiding early detection from malware scanners.

ZNIU also creates a backdoor that can be used for future remote-controlled attacks and has the ability to send SMS messages, which opens the door for money making schemes such as sending spam, phishing or messaging premium rate numbers owned by the attacker.

The malware also collects device data.

The DirtyCOW vulnerability

By successfully exploiting the DirtyCOW bug (known officially as CVE-2016-5195), ZNIU is able to grant itself all the permissions it needs to do harm without having to ask the user, or trick them.

The bug is explained in the Red Hat bug database like this:

A race condition was found in the way Linux kernel’s memory subsystem handled breakage of the read only private mappings COW situation on write access.

An unprivileged local user could use this flaw to gain write access to otherwise read only memory mappings and thus increase their privileges on the system.

In other words, the Linux Copy On Write mechanism can be tricked into overwriting a read-only file, which is something of a security catastrophe if that read-only file is a critical system executable or configuration file.

For a comprehensive explanation of DirtyCOW, check out Paul Ducklin’s excellent article – Linux kernel bug: DirtyCOW “easyroot” hole and what you need to know.

What to do

The good news is that ZNIU isn’t available on Google Play and only works on devices running older versions of Android that aren’t patched against DirtyCOW. 

Google released a patch for Android way back in December but, sadly, the Android ecosystem is badly fragmented and whether or not you get updates is up to your vendor, not Google. Different vendors will release patches at different times and some may not release them at all.

So just because there’s a patch for DirtyCOW, that doesn’t mean you’ve got it. What we can say for sure is that users of the latest version of the Android operating system, Oreo, have nothing to worry about and that Sophos Mobile customers are protected (Sophos detects ZNIU as Andr/Rootnik-AI and Andr/ZNIU-A.)

If you’re concerned about your device, please contact your vendor.

Another tale of patches unapplied

History is littered with cases where attacks and outbreaks have happened because patches were available but weren’t applied. There’s no better example in recent history than the WannaCry outbreak in May 2017. At the time, we noted that its spread was made possible in part by the unheeded lessons of the past, such as Slammer and Conficker.

The DirtyCOW hole was plugged a year ago so please make sure you have the latest security updates on your phone or tablet.



from Naked Security http://ift.tt/2x26Tp5

Is your Mac software secure but firmware vulnerable?

Mac users who have updated to the latest OS version or have downloaded and implemented the most recent security update may not be as secure as they originally thought, Duo Security researchers have found.


That’s because many of them did not receive the newest firmware along with OS and software updates.

Why is keeping your firmware up-to-date important?

EFI firmware (Intel’s implementation of the Unified Extensible Firmware Interface – UEFI) is present on all Macs. It bridges the system’s hardware, firmware and OS together to enable it to go from power-on to booting the operating system.

Known attacks against vulnerable EFI firmware include Thunderstrike 1, Thunderstrike 2, Sonic Screwdriver, and Direct Memory Access.

“In a modern system, the EFI environment holds particular fascination for security researchers and attackers due to the level of privilege it affords if compromise is successful,” the researchers explained.

“EFI is often talked about as operating at privilege level ring -2, which indicates it is operating at a lower level than both the OS (ring 0) and hypervisors (ring -1). In a nutshell, this means that attacking at the EFI layer gives you control of a system at a level that allows you to circumvent security controls put in place at higher levels, including the security mechanisms of the OS and applications.”

What’s more, once a system has been compromised in this way, it’s difficult to clean it. Even wiping the hard disk completely wouldn’t remove this kind of compromise, they pointed out.

Research findings

The researchers have spent the last few months analyzing over 73,000 Mac systems deployed in organizations across a number of industry verticals, and found that 4.2% were running firmware versions that did not match the versions the researchers expected, which could leave them open to publicly disclosed vulnerabilities.

“The level of discrepancy increased significantly above the mean for certain Mac models, with the highest being 43.0% for the iMac 21.5” late 2015 model where 941 out of 2190 real world systems were running incorrect versions of EFI firmware,” they noted.

“The size of this discrepancy is somewhat surprising, given that the latest version of EFI firmware should be automatically installed alongside the OS updates. As such, only under extraordinary circumstances should the running EFI version not correspond to EFI version released with the running OS version.”

Apple began releasing EFI updates bundled with OS and security updates in 2015. The security support provided for EFI firmware depends on the hardware model of a Mac, as well as on the version of the OS a system is running. In theory, all machines should automatically be receiving the latest EFI updates, but this research has shown that the process is not foolproof.

“The sheer number of affected systems alongside the manner in which they cluster depending on OS and hardware version gives us confidence that the anomalies are not purely a result of user error on the part of system owners and it is, in fact, reflective of some kind of failure in the way EFI firmware updates are installed,” they noted. “Not every method of updating OS X/macOS is equivalent and some methods are seemingly not able to update the EFI firmware.”

Unfortunately, users and administrators are not notified if the EFI update process fails. “Compounding this issue further is that without manually carving up an OS update package and knowing the undocumented commands you have to run to update an EFI firmware image, there is no official way to update the EFI image without a full reinstall of the OS update,” they added.

What can you do?

Rich Smith, Director of R&D at Duo Security, advises Mac users and admins to check if they’re running the latest version of EFI for their system(s). They can do so by using EFIgy, a free open-source tool soon to be made available by the company.

He also advises updating to macOS 10.12.6 or later. “This will not only give you the latest versions of EFI firmware released by Apple, but also make sure you’re patched against known software security issues as well,” he pointed out.

If, for hardware reasons, you can’t do that, you may be out of luck and not be able to run the most up-to-date EFI firmware. In that case, you should consider using EFIgy to check whether your current version of EFI is exposed to a currently known EFI vulnerability (this functionality will be released soon, Smith says).

“As these attacks are ones that are used by sophisticated adversaries it is important to understand whether you or your organisation is one that includes this kind of adversary in your threat model. If you do consider advance attacks to be something you proactively protect against, then it’s well worth considering how a system with a compromised EFI could impact your environment as well as how you would be able to attest to the integrity of the EFI firmware of your Macs. In many situations, answers to those questions would be ‘badly’ and ‘we probably wouldn’t be able to,’” he noted.

“In those situations, it would be well worth considering replacing Macs that cannot have updated EFI firmware applied, or moving them into roles where they are not exposed to EFI attacks (physically secure, controlled network access). While EFI attacks are currently considered both sophisticated and targeted, depending on the nature of the work your organization does and the value of the data you work with, it’s quite possible that EFI attacks fall within your threat model. In this regard, vulnerability to EFI security issues should carry the same weight as vulnerability to software security issues. You’ll need to determine if you can accept the risk of having vulnerable (and potentially unpatchable) systems in your environment. In general we would not advocate for the average user to throw away their Mac because their EFI environment is not being security supported by Apple.”

The whitepaper detailing the research also includes a granular breakdown of the Mac systems running unexpected firmware versions, as well as a list of models that received no EFI updates between OS versions 10.10.0 and 10.12.6.

What is Apple doing about this?

The researchers shared their findings with Apple, gave them previews of the paper and made the raw data available to them.

“Interactions with Apple have been very positive and they seemed to genuinely appreciate the work and agreed with our methodologies, findings and conclusions,” Smith told Help Net Security.

“Despite the issues we found, we truly believe that Apple is leading the way in terms of taking EFI security seriously. They have continued to take steps forward with the release of macOS 10.13 (High Sierra). They have a world class firmware security team and we are excited to see the new security approaches they will take in future to keep the EFI environment even more secure.”


from Help Net Security http://ift.tt/2fDE4MS

Deloitte Hacked

The large accountancy firm Deloitte was hacked, losing client e-mails and files. The hackers had access inside the company's networks for months. Deloitte is doing its best to downplay the severity of this hack, but Brian Krebs reports that the hack "involves the compromise of all administrator accounts at the company as well as Deloitte's entire internal email system."

So far, the hackers haven't published all the data they stole.


from Schneier on Security http://ift.tt/2fDfkVn

iPhone X Face ID baffled by kids, twins, siblings, doppelgängers


Youngsters! Pfft. They all look alike!

No, really, they do if you’re the Face ID facial recognition system in Apple’s iPhone X. Specifically, twins, siblings and look-alikes can trip false authentications. Growing kids, with their morphing faces, also baffle the biometric authentication.

Apple said so in a guide (PDF) about Face ID security that it published on Wednesday.

Overall, Face ID is pretty resistant to letting the wrong person log into your phone, Apple said. The possibility of a random person being able to unlock your phone by looking at it is about 1 in 1 million. Not bad, particularly when you compare it with Touch ID, which can be fooled approximately 1 in 50,000 times, Apple says.

But the odds go out the window once you throw in twins, siblings, pre-teens and evil doppelgängers. From Apple’s security guide:

The probability of a false match is different for twins and siblings that look like you as well as among children under the age of 13, because their distinct facial features may not have fully developed. If you’re concerned about this, we recommend using a passcode to authenticate.

Of course, you don’t have to use Face ID instead of a passcode. And as we noted recently when covering how features in the new iOS 11 will perhaps create fresh headaches for law enforcement, there are reasons why you might prefer to have your phone set up to require a passcode over a biometric sign-on.

Namely, the history of court decisions in the US has tended to lean toward granting Fifth Amendment protection against forcing people to give up their passcodes, given that a passcode is something you know, and the Fifth Amendment protects people from testifying against themselves.

Similar thinking has meant that biometrics, including Touch ID, involve something you are, not something you know, making it kosher to force unlocking with finger swipes as far as the courts are concerned. (N.B. There are court decisions and court actions that haven’t synced up with those interpretations, including the ex-cop who’s suspected of child abuse image trafficking, won’t or can’t give up his passcodes, and is being jailed indefinitely until he does.)

At any rate, even if you do opt to use Face ID – granted, it can be a time-saver if your passcode is as pleasingly plump and considerably complex as it really should be – there are plenty of times when you still have to use a passcode to authenticate on the iPhone X. In its attempt to clarify questions about the security around iPhone X, Apple says you’re required to use a passcode when…

  • The device has just been turned on or restarted.
  • The device hasn’t been unlocked for more than 48 hours.
  • The passcode hasn’t been used to unlock the device in the last 156 hours (six and a half days) and Face ID has not unlocked the device in the last 4 hours.
  • The device has received a remote lock command.
  • After five unsuccessful attempts to match a face.
  • After initiating power off/Emergency SOS by pressing and holding either volume button and the side button simultaneously for 2 seconds.
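Taken together, the conditions above amount to a simple predicate. Here’s a sketch of that logic as code; the state fields, names and structure are our own modeling for illustration, not anything Apple exposes.

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    # Hypothetical fields modeling the published Face ID fallback rules.
    just_restarted: bool = False
    hours_since_unlock: float = 0.0
    hours_since_passcode: float = 0.0
    hours_since_faceid: float = 0.0
    failed_face_matches: int = 0
    remote_lock_received: bool = False
    sos_initiated: bool = False

def passcode_required(s: DeviceState) -> bool:
    # Any one condition forces the passcode instead of Face ID.
    return (
        s.just_restarted
        or s.hours_since_unlock > 48
        or (s.hours_since_passcode > 156 and s.hours_since_faceid > 4)
        or s.remote_lock_received
        or s.failed_face_matches >= 5
        or s.sos_initiated
    )
```

Note how the 156-hour rule only bites when Face ID has also gone unused for 4 hours: a device you unlock with your face regularly keeps working, while a dormant one falls back to the passcode.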

That’s a list worth paying attention to. Given the soaring rate of forced warrantless device searches at the US border, it’s good to know how to quickly turn off Face ID (though do bear in mind that US Customs and Border Protection officers may not take kindly to a lack of cooperation).

According to Gadget Hacks, which says it gets its info from Craig Federighi, Apple’s senior vice president of software engineering, to disable Face ID in a pinch, just grip the buttons on both sides of an iPhone X when handing it over to another person:

Since a screenshot uses the side button and volume up together, we can assume he means you’d need to press all three buttons – side, volume up, and volume down – simultaneously. We’re not sure how long they would have to be pressed, but it should not take very long since speed is necessary when handing your device over.

On all other iPhone models in iOS 11, press the power button 5 times in a row to activate Emergency SOS, which will quickly disable Touch ID until a passcode is entered.

At any rate, one of the major questions asked about iPhone X Face ID hasn’t been about kids or siblings, per se; rather, it’s about facial recognition algorithms that are trained by white people on mostly white faces. Facial recognition algorithms have hence been found to be less accurate at identifying black faces.

According to its security guide, Apple has taken that into account. The company says that its facial recognition neural networks have been trained with over a billion images, representing people of different genders, ages, and ethnicities from around the world. The networks have also been designed to work with hats, scarves, glasses, contact lenses, and many sunglasses, whether the faces are indoors or outdoors, and even in complete darkness.

Apple says that it’s also devoted an additional neural network that’s specifically been trained to spot and resist spoofing attacks via photos or masks.

Those who are nervous about the privacy of their facial biometrics will be glad to hear that face data won’t be leaving the iPhone X. It won’t be backed up by iCloud, for instance, which is good to hear, given how Apple’s online backup is targeted by so many creeps who phish passcodes in an attempt to get at intimate material in iCloud.

From the security guide:

Face ID data doesn’t leave your device, and is never backed up to iCloud or anywhere else. Only in the case that you wish to provide Face ID diagnostic data to AppleCare for support will this information be transferred from your device.



from Naked Security http://ift.tt/2xHXDZi

Cybercriminals increasingly focusing on credential theft

Criminal tactics used to access user credentials are growing in prevalence, and a record 47 percent of all malware is new or zero day, and thus able to evade signature-based antivirus solutions, according to WatchGuard.


Malware detection by region

“From JavaScript-enabled phishing attacks and attempts to steal Linux passwords, to brute force attacks against web servers, the common theme here is that login access is a top priority for criminals. Knowing this, businesses must harden exposed servers, seriously consider multi-factor authentication, train users to identify phishing attacks and implement advanced threat prevention solutions to protect their valuable data,” said Corey Nachreiner, CTO at WatchGuard Technologies.

Mimikatz accounts for 36 percent of the top malware

A popular open source tool used for credential theft, Mimikatz made the top 10 malware variants list for the first time this quarter. Often used to steal and replace Windows credentials, Mimikatz surfaced with such frequency that it was the top malware variant of Q2. This new addition to the familiar group of top malware variants shows that attackers are constantly adjusting tactics.

Phishing attacks incorporate malicious JavaScript to fool users

For several quarters, attackers have leveraged JavaScript code and downloaders to deliver malware in both web and email-based attacks. In Q2, attackers used JavaScript in HTML attachments to phishing emails that mimic login pages for popular legitimate sites like Google, Microsoft and others to trick users into willingly giving up their credentials.

Attackers target Linux passwords in Northern Europe

Cyber criminals used an old Linux application vulnerability to target several Nordic countries and the Netherlands with attacks designed to steal password hash files. More than 75 percent of attacks leveraging a remote file inclusion vulnerability to access /etc/passwd were aimed at Norway (62.7 percent) and Finland (14.4 percent). With such a high volume of incoming attacks, users should update Linux servers and devices as a basic precaution.


Network attack detections by region

Brute force attacks against web servers climb

This summer, attackers used automated tools against web servers to crack user credentials. With the heightened prevalence of web-based attacks against authentication in Q2, brute force login attempts against web servers were among the top 10 network attacks. Web servers without protections that monitor failed logins leave automated attacks free to guess thousands of passwords each second.
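As a sketch of the failed-login monitoring described above, here is a minimal sliding-window lockout in Python. The thresholds (five failures in 60 seconds) and the IP addresses are invented for illustration:

```python
import time
from collections import defaultdict, deque

# Lock out a source address after too many failed logins inside a
# sliding time window. Thresholds here are illustrative.
MAX_FAILURES = 5
WINDOW_SECONDS = 60

failures = defaultdict(deque)  # ip -> timestamps of recent failed logins

def record_failure(ip, now=None):
    """Record a failed login; return True if the IP should be locked out."""
    now = time.time() if now is None else now
    q = failures[ip]
    q.append(now)
    # Drop failures that fell outside the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES

# Simulate a brute-force burst from one address: ten failures in ten seconds.
locked = any(record_failure("203.0.113.9", now=t) for t in range(10))
print("locked out:", locked)
```

Even a crude check like this defeats the "thousands of guesses per second" attack, since the attacker is throttled after a handful of tries.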

Nearly half of all malware is able to circumvent legacy AV solutions

At 47 percent, more new or zero day malware is making it past legacy AV than ever before. The data shows that older, signature-based AV is increasingly unreliable when it comes to catching new threats, illustrating the need for behavioral detection solutions in order to catch advanced persistent threats.


from Help Net Security http://ift.tt/2yL8Sjw

How to keep your cryptocoins safe?

Intrigued by the many possibilities of cryptocurrencies – not least by the prospect to “earn” serious money while doing nothing – you’ve decided to take the plunge and invest in some.

But do you know how to keep your investments safe in the Wild West that is currently the cryptocurrency market?


What you need to worry about

Setting aside the risks that could arise from the current legal and regulatory murkiness regarding virtual currencies (and cryptocurrencies as a subset of that category), your biggest worry at the moment should be the potential theft of your assets.

The list of dangers is long.

Safety measures

There’s not much you can do about bugs and insecure services, but you can do things to protect the assets you have in your hand (so to speak).

For one, be extra careful when investing in projects and companies through initial coin offerings (ICOs).

The popularity of this fundraising method has exploded, but as we’ve recently witnessed, crooks can find ways to pocket the money meant for the startups. ICO-related scams (fake ICOs) are also a thing.

Secondly, you can protect your digital assets by keeping them in “cold storage”:

  • In a paper wallet
  • In an offline hardware wallet
  • On a data storage device inaccessible to anyone except you (e.g. a USB stick or disk drive)
  • Online but encrypted (with the encryption key kept offline).

All of these options have their pros and cons, but keeping most of your assets in them is safer than keeping them in software wallets, hosted wallets, or wallets tied with accounts of online exchanges.
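As an illustration of the last option – data kept online but encrypted, with the key derived offline from a passphrase – here is a stdlib-only Python sketch. It derives a key with scrypt and encrypts with an HMAC-SHA256 counter-mode keystream; it omits authentication (a MAC over the ciphertext) for brevity, and a real setup should use a vetted library such as `cryptography` with AES-GCM instead:

```python
import hashlib
import hmac
import os
import secrets

def keystream(key, nonce, length):
    """Counter-mode keystream built from HMAC-SHA256 (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase, plaintext):
    salt, nonce = os.urandom(16), os.urandom(16)
    key = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return salt + nonce + ct

def decrypt(passphrase, blob):
    salt, nonce, ct = blob[:16], blob[16:32], blob[32:]
    key = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

wallet_key = secrets.token_bytes(32)          # stand-in for a private key
blob = encrypt(b"correct horse battery", wallet_key)
assert decrypt(b"correct horse battery", blob) == wallet_key
```

The point of the design is that the ciphertext can sit anywhere online; without the offline passphrase (and the memory-hard scrypt work to derive the key), the blob is useless to a thief.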

Keys for software wallets can be easily stolen (phishing, malware), and third-party wallet services and exchanges (that hold your private keys) can be hacked.

Of course, there will come a time when you’ll have to use these services to carry out transactions, but definitely don’t keep the majority of your stash in them on a day-to-day basis.

Be on the lookout for phishing or scam emails and messages on social media and online forums. Projects like the MyEtherWallet team’s database of active scams can help you identify scam and phishing sites.

Another thing that should prevent you from falling for phishing schemes: bookmark your crypto sites, and always use those bookmarks to reach them.


from Help Net Security http://ift.tt/2xHgkw3

New infosec products of the week: September 29, 2017

Fortanix launches runtime encryption using Intel SGX

Fortanix’ Self-Defending Key Management Service (SDKMS) is a cloud service delivering runtime encryption technology to protect applications and data during use. Runtime encryption allows general-purpose computation on encrypted data without exposing sensitive data to untrusted operating systems, root users, cloud providers, or malicious insiders.


Manage real-time change detection for global IT environments

Qualys released its highly scalable and centralized File Integrity Monitoring (FIM) Cloud App, which logs and centrally tracks file change events across global IT systems and a variety of enterprise operating systems. It gives customers centralized cloud-based visibility into activity resulting from normal patching and administrative tasks, change control exceptions or violations, or malicious activity, and lets them report on that activity as part of compliance mandates.
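File integrity monitoring boils down to comparing cryptographic hashes against a known-good baseline. A toy Python sketch of the idea follows; the temp file and the "unauthorized edit" are invented for illustration, and this is not a claim about how the Qualys app works internally:

```python
import hashlib
import os
import tempfile

def snapshot(paths):
    """Record a baseline: SHA-256 hash of each file's contents."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest() for p in paths}

def changed_files(baseline, paths):
    """Return the paths whose current hash differs from the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline[p]]

tmp = tempfile.mkdtemp()
cfg = os.path.join(tmp, "app.conf")
with open(cfg, "w") as f:
    f.write("debug = false\n")

baseline = snapshot([cfg])
with open(cfg, "w") as f:          # simulate an unauthorized edit
    f.write("debug = true\n")

print(changed_files(baseline, [cfg]))   # the edited file shows up
```

Production FIM tools add the hard parts on top of this core: scheduling, kernel-level change events, allowlists for expected patching activity, and centralized reporting.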


End-to-end visibility for public cloud platforms

Ixia has further extended the CloudLens Visibility Platform to include support for Microsoft Azure, Google Cloud Platform, IBM Bluemix, and Alibaba Cloud, in addition to the existing support for AWS, and for both Windows and Linux. CloudLens was designed from the ground up to retain the benefits of the cloud – elastic scale, flexibility, and agility – while enabling security, analytics, and forensics tools to acquire the needed data, whether the tool is in a private data center or public cloud.


Twistlock releases Twistlock 2.2 with Incident Explorer

The latest release of Twistlock focuses on advanced threat analytics and prevention and includes several machine learning driven layers such as a Cloud Native Network Firewall and Incident Explorer. In addition, the release provides runtime defense down to the host OS and delivers comprehensive compliance monitoring and enforcement for Kubernetes.


Cloud-based logging service to enable innovative security applications

Palo Alto Networks announced its new cloud-based Logging Service, which allows customers to amass large amounts of their own data from the Palo Alto Networks Next-Generation Security Platform. Logging Service provides a centralized and scalable logging infrastructure without operational overhead, allowing customers to collect log data without local compute and storage limitations.


XT Access Manager: Privileged account management and automation

Xton Technologies released the XT Access Manager (XTAM), a PAM platform that combines a secure identity vault, session management with recording, and automated password resets. Customers can also take advantage of features including delegated script execution, discovery of privileged accounts, and extensive reporting for network computers and IoT devices.


SecurityFirst delivers scalable and transparent data-centric protection

DataKeep is a data-centric security software solution comprising a management console and encryption agents. DataKeep enables accelerated encryption at scale, protecting data from the point of creation in the OS all the way through to, and including, data storage, regardless of whether data resides on-premises, in the cloud or in a virtual environment.


Natural language intelligence software enables anyone to ask security questions

Insight Engines Cyber Security Investigator (CSI) for Splunk lets users ask questions of datasets using natural language. Its Splunk application lets anyone in an organization detect, investigate, and visualize cyberthreats – even if they don’t have expertise in Splunk Search Processing Language (SPL).


Threatcare builds AI-based virtual cybersecurity professional

Similar to a Siri or Alexa for cybersecurity, Violet has machine learning and natural language processing (NLP) capabilities and can answer questions and take commands from security analysts who are looking to find and fix urgent threats in their networks. Violet is billed as the world’s first virtual cybersecurity professional that aims to replace multiple team members, offering continuous reconnaissance to give an attacker’s view of an organization.



from Help Net Security http://ift.tt/2xKYVnX

Activists targeted with barrage of creative phishing attempts

More often than not, the human element is the weakest link in the security chain. This fact is heavily exploited by cyber attackers, and makes phishing and spear-phishing attempts the most likely and most effective method to start an attack.

If the attackers are after a specific target there’s seemingly no end to the different lures they can come up with, as digital civil liberties activists at Free Press and Fight For the Future have recently witnessed.


The campaign

According to Electronic Frontier Foundation technologists Eva Galperin and Cooper Quintin, between July 7th and August 8th of 2017 the activists were hit with almost 70 spear-phishing attempts aimed at stealing Google, Dropbox, and LinkedIn credentials.

“The attackers were remarkably persistent, switching up their attacks after each failed attempt and becoming increasingly creative with their targeting over time,” they noted.

The phishing emails ran the gamut from generic to extremely targeted, and tried to exploit the targets’ curiosity, anxiety, embarrassment or anger. Here are a few examples:

  • Generic emails supposedly sent by co-workers, with links to view a document or invitation
  • Emails with clickbait headlines appealing to the political interests of the targets or with lurid subjects aimed to embarrass the recipient into clicking a fake unsubscribe link. This latter approach also included fake confirmations of subscription to adult sites
  • Emails made to look like they were sent by members of the targets’ family, with links that ostensibly lead to shared family photos
  • Requests for links to specific content (e.g. the target’s music available online). The attacker replied to the sent information and claimed the link did not work correctly – but replaced it with one that pointed to a Gmail phishing page
  • An email made to look like it was coming from a YouTube user that commented (aggressively and hatefully) on a real YouTube video that the target had uploaded.

“The sophistication of the targeting, the accuracy of the credential phishing pages, the working hours, and the persistent nature of the attacks seem to indicate that the attackers are professionals and had a budget for this campaign,” the technologists shared.

“Although this phishing campaign does not appear to have been carried out by a nation-state actor and does not involve malware, it serves as an important reminder that civil society is under attack. It is important for all activists, including those working on digital civil liberties issues in the United States, to be aware that they may be targeted by persistent actors who are well-informed about their targets’ personal and professional connections.”

Thwarting phishers

Luckily, there is a simple way to foil this type of attack: enable two-factor authentication (2FA) on all important accounts.

In fact, activists are not the only ones who should enable 2FA where possible. Seeing that our accounts often contain sensitive information that we wouldn’t want to see compromised and that hijacked accounts can effectively be used for further phishing and scam attempts, everybody should set it up.
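For the curious, the codes produced by 2FA authenticator apps are typically computed with the TOTP algorithm (RFC 6238): an HMAC-SHA1 over the current 30-second interval, dynamically truncated to six digits. A minimal Python sketch, checked against the RFC's published test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 6 digits by default)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 gives 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, now=59, digits=8))
```

Because the code changes every 30 seconds, a phished password alone is no longer enough to take over the account, which is exactly why 2FA blunts campaigns like this one.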


from Help Net Security http://ift.tt/2yxtkUn

Inadequate IT processes continue to create major security and compliance risks

The results of a study of more than 900 IT security professionals, conducted by Dimensional Research, spotlights how common security best practices – such as timely removal of access to corporate data and applications, dormant account identification, and role administration – continue to be a challenge and concern for organizations worldwide.

How confident are you that all former users are fully de-provisioned in a timely manner?


Deprovisioning accounts

Most alarmingly, 70 percent of respondents express a lack of confidence that all former employees and employees changing roles are fully deprovisioned – or have their accounts changed or removed – in a timely enough manner. Therefore, their accounts remain open and available with active authorization even after an employee changes roles or leaves the organization.

Only 14 percent say they remove access for users immediately upon a change in HR status. Related findings point to concerning practices regarding management of dormant accounts. Only nine percent are confident that they have no dormant accounts, only 36 percent are “very confident” they know which dormant user accounts exist, and a remarkable 84 percent confessed that it takes a month or longer to discover these dangerous open doors into the enterprise.

Best practices demand that access be removed for employee accounts that are no longer active. In the case where an employee changes roles, access needs to be altered to provide the new access and authorization required for the new role and remove access that is no longer needed. Oftentimes, the removal of no-longer-needed access is overlooked. When user accounts are not deprovisioned (often called dormant accounts), they are open invitations for disgruntled employees, hackers or other threat actors, who can exploit the accounts and gain access to sensitive systems and information, resulting in data breaches or compliance violations.
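The core of dormant-account discovery is simple: compare each account's last login against a cutoff. A hedged Python sketch with invented account data follows; a real deployment would pull last-login timestamps from a directory service such as Active Directory rather than a hard-coded dict:

```python
from datetime import datetime, timedelta

# Flag accounts whose last login is older than a cutoff. The 90-day
# threshold and the account data are illustrative.
DORMANT_AFTER = timedelta(days=90)

accounts = {
    "alice": datetime(2017, 9, 20),
    "bob":   datetime(2017, 2, 1),    # left the company, never deprovisioned
    "carol": datetime(2017, 9, 1),
}

def dormant(accounts, today):
    return sorted(user for user, last_login in accounts.items()
                  if today - last_login > DORMANT_AFTER)

print(dormant(accounts, today=datetime(2017, 9, 29)))
```

Run periodically, even a check this crude would surface the "open doors" the survey says take organizations a month or longer to find.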

Inadequate IT processes everywhere

The user account access and management challenges are not limited to legacy systems and data, as they also are relevant for newer technologies such as file-sync-and-share services like Box and Dropbox. Only 14 percent of respondents report deprovisioning access to these accounts in a centralized/automated manner.

Other findings provide further evidence of the challenges organizations face with regard to managing employee access to IT resources:

  • Only one in four are “very confident” that user rights and permissions in their organizations are correct for the individuals’ roles.
  • Seventy-one percent are concerned about the risk represented by dormant accounts.
  • Ninety-seven percent have a process for identifying dormant users, but only 19 percent have tools to aid in finding them.
  • Only 11 percent audit enterprise roles more frequently than monthly.

“Today, when employees leave an organization or change roles within the same organization, it’s more critical than ever that any access rights to the corporate network, systems, and data are revoked or modified to match their new status,” said John Milburn, president and general manager of One Identity. “The overwhelming lack of confidence that organizations are doing this in a timely manner means they are still grappling with these same critical issues, offering up a gaping security hole for former employees, or hackers to exploit those identities, and wreak havoc for hours, weeks or even months to come. Those that don’t finally get this under control are more likely than ever to suffer a significant breach, and all of the resulting major impacts on reputation, brand, and stock valuation.”

Does your enterprise have dormant users, where the accounts associated with the identities are not being used?


Credential-based attack vectors

One of the easiest ways for malicious outsiders, or even insiders, to gain access into an organization’s IT network is by stealing user credentials such as user names and passwords. Once access is secured, a series of lateral movements and privilege escalation activities can procure access to the type of information and systems that are most coveted by bad actors, such as a CEO’s email, customer or citizen personally identifiable information, or financial records.

The longer inactive accounts are available to bad actors, the more damage can potentially be done, including data loss, theft and leakage, which could result in irreparable reputational damage, compliance violations, large fines, and a significant drop in stock valuation.


from Help Net Security http://ift.tt/2xHflvI

Company directors are increasingly involved with cybersecurity

According to a new survey by BDO USA, 79% of public company directors report that their board is more involved with cybersecurity than it was 12 months ago and 78% say they have increased company investments during the past year to defend against cyber-attacks, with an average budget expansion of 19 percent.


This is the fourth consecutive year that board members have reported increases in time and dollars invested in cybersecurity. Despite this positive progress, the survey also found that businesses continue to resist sharing information on cyber-attacks with entities outside of their company. Just one-quarter are sharing information gleaned from cyber-attacks with external entities – a practice that needs to become more prevalent for the safety of critical infrastructure and national security.

“The survey also reveals a significant vulnerability – the continued failure of companies to share information they have gathered from cyber-attacks. Sharing information gleaned from cyber-attacks is a key to defeating hackers, yet just one-quarter of directors say their company is sharing information externally. This behavior needs to change,” said Gregory Garrett, Leader of International Cybersecurity at BDO USA.

Cyber risk

Almost one in five (18%) board members indicate that their company experienced a data breach during the past two years, a percentage very similar to the previous two years (22%).

A majority (61%) of corporate directors say their company has a cyber-breach/incident response plan in place, compared to 16% who do not have a plan and 23% who are not sure whether they have such a plan. The percentage with plans is approximately the same as a year ago (63%), but a major improvement from 2015, when 45% of directors reported having them.

Seventy-nine percent of public company board members report that their board is more involved with cybersecurity than it was 12 months ago. The vast majority of directors (91%) are briefed on cybersecurity at least once a year – this includes 28% that are briefed quarterly and more than one-fifth that are briefed twice a year (21%). The balance are briefed annually (36%) or more often than quarterly (6%).

Surprisingly, nine percent of board members say they are still not briefed at all on cybersecurity. However, during the four years of the survey, the percentage of directors reporting no cybersecurity briefings has dropped consistently.


Lack of sharing

Sharing information gleaned from cyber-attacks is key to defeating hackers and the U.S. government has consistently communicated how businesses can contact relevant federal agencies about cyber incidents they experience.

Unfortunately, when asked whether they share information they gather from cyber-attacks, only 25% of directors – virtually unchanged from 2016 (27%) – say they share the information externally. A similar proportion (24%) say they do not share the information with anyone and 51% aren’t sure whether they do or not.

Of those sharing information on their cyber-attacks, the vast majority (86%) share with government agencies (FBI, Dept. of Homeland Security) and 47% share with ISAC (Information Sharing & Analysis Centers). Very few (8%) share with competitors.


from Help Net Security http://ift.tt/2k666lI

New Internet Explorer Bug

There's a newly discovered bug in Internet Explorer that allows any currently visited website to learn the contents of the address bar when the user hits enter. This feels important; the site I am at now has no business knowing where I go next.


from Schneier on Security http://ift.tt/2xJfcJY

DHS expanding surveillance of immigrants to social media


Perhaps if George Orwell were writing today, he’d include an addendum to his iconic line, “Big Brother is watching you.” Because, if you’re a US immigrant, Big Brother is serving notice that he will also be reading you – reading your social media posts and tracking your online activities.

The notice came with the Department of Homeland Security’s (DHS) publication of a new rule to “modify a current DHS system of records.” And the key phrase in a document of nearly 9000 words is that the department intends to:

expand the categories of records to include … social media handles, aliases, associated identifiable information, and search results.

It also vastly expands the number of people to which the new rule would apply, from those seeking immigrant status to include naturalized citizens and legal permanent residents. And, as security guru and IBM Resilient CTO Bruce Schneier put it in a blog post, “it seems to also include US citizens (who) communicate with immigrants.”

The rule is set to take effect in less than a month – 18 October – which leaves a pretty small window for privacy advocates to comment. But there is escalating debate not only about the impending change itself, but whether it is a change at all.

According to DHS itself, it is not a change – it simply restates a policy that has been in effect for more than five years. Gizmodo reported that an email from DHS stated:

The notice did not announce a new policy. The notice simply reiterated existing DHS policy regarding the use of social media. In particular, USCIS (US Citizenship and Immigration Services) follows DHS Directive 110-01 for the Operational Use of Social Media. This policy is available on DHS’s public website and was signed on 6/8/2012.

But privacy advocates contend that while past policy has allowed trained USCIS officers to search publicly available social media to see if a person is eligible for an immigration benefit, it hasn’t applied to legal residents or naturalized citizens, and didn’t demand things like their social media handles and aliases, or to search their internet history.

A former DHS senior official, who declined to be identified because of a current employment situation, agreed that the policy does not go back to 2012. Social media collection by immigration began, she said, after the December 2015 shooting in San Bernardino by a husband and wife who killed 14 and wounded 22 before they were later killed in a shootout.

She noted that the new rule, while it does include social media handles and aliases, does not include passwords. But, she agreed that it is “a significant expansion.”

“It is not a nothing-burger,” she said.

The continuing stance of DHS – and the FBI and other agencies that are part of the US intelligence community – is that they must have access to social media accounts to be able to find people who are becoming radicalized and/or may be planning or involved in terrorist activity.

Former FBI director James Comey, speaking in Boston earlier this year, said while he loves privacy, the “bargain” necessary to protect both privacy and safety is that, “there is no such thing as absolute privacy.”

But privacy advocates counter that this kind of collection is too invasive, discriminates against immigrants and wouldn’t improve national security anyway.

The American Civil Liberties Union (ACLU) issued a statement saying the rule would “single out a huge group of people to maintain files on what they say (and) have a chilling effect on the free speech that’s expressed every day on social media.” It added that the “collect-it-all approach is ineffective to protect national security …”

Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation (EFF), noting that the information collected will be held, probably indefinitely, in so-called “Alien Files” or “A-Files,” called it “a new form of invasive social media surveillance.”

It is especially troubling, he added, when combined with “all manner of high tech surveillance, including facial recognition and cell site simulators,” used to monitor immigrants.

The former DHS official added that US immigration is not equipped to do this kind of screening effectively. “Now that they’re collecting it, they’re going to be responsible for it,” she said, “and they don’t have the bandwidth to screen and clear every malcontent who tweets something. It’s classic security theater.”

She said DHS should limit itself to collecting social media information, “when they have probable cause or reasonable suspicion. But not everybody from students to green card holders. They’re putting a bigger target on themselves when they miss something – and they will.”

All of which means, as privacy experts have long said, that the best way individuals can limit this collection is to place some limits on themselves. Which would mean modifying the classic political advice from a Boston politico decades ago: “Never write when you can speak, never speak when you can nod, never nod when you can wink.”

Disgraced former New York governor Eliot Spitzer added to that nine years ago, after he had left a trail of online communications implicating himself in a prostitution ring: “Never put it in email.”

To which, he would probably now add: “Never put it on social media.”


from Naked Security http://ift.tt/2xBaS09

Android unlock patterns are too easy to guess, stop using them


Let’s start with some things we knew already: people are really bad at creating and remembering secure passwords and PINs.

We’re also bad at choosing and answering password recovery questions. Most of us can’t even cook up an unlock pattern for our Androids that’s not crazy easy to predict, be it by shoulder-surfing or the tell-tale streaks we leave with our greasy fingers.

Now, a new report (PDF) from security researchers at the US Naval Academy and the University of Maryland Baltimore County has quantified just how absurdly easy it is to do an over-the-shoulder glance that accurately susses out an Android unlock pattern.

As we explained a few years ago, a lockscreen pattern allows you to lock/unlock your device by swiping your finger on the screen, drawing a pattern that touches at least four and up to nine nodes. Just as with character counts in a passcode, the more nodes you touch in your pattern, the more secure your lock should be.

Unfortunately, while there are 389,112 possible patterns you could draw using four to nine nodes, when researcher Marte Løge analyzed 3400 user-selected patterns, she found that the most commonly selected patterns used just four.
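The 389,112 figure can be reproduced with a short search: enumerate all 4-to-9-node paths on the 3x3 grid, honoring Android's rule that a stroke may not jump over an unvisited node (e.g. going from corner 0 to corner 2 requires node 1 to be visited already). A Python sketch:

```python
# Nodes numbered 0-8, row by row:
#   0 1 2
#   3 4 5
#   6 7 8
# SKIP maps a (from, to) pair to the node it passes over.
SKIP = {
    (0, 2): 1, (3, 5): 4, (6, 8): 7,          # horizontal jumps
    (0, 6): 3, (1, 7): 4, (2, 8): 5,          # vertical jumps
    (0, 8): 4, (2, 6): 4,                     # diagonal jumps
}
SKIP.update({(b, a): m for (a, b), m in SKIP.items()})  # both directions

def count_from(node, visited):
    """Count valid pattern continuations, tallying every prefix of length >= 4."""
    visited = visited | {node}
    total = 1 if len(visited) >= 4 else 0
    for nxt in range(9):
        if nxt in visited:
            continue
        mid = SKIP.get((node, nxt))
        if mid is not None and mid not in visited:
            continue                           # would jump over an unvisited node
        total += count_from(nxt, visited)
    return total

total_patterns = sum(count_from(start, set()) for start in range(9))
print(total_patterns)  # 389112
```

That number sounds comfortably large until you recall the study's finding that real users cluster in a tiny, predictable corner of the space.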

That’s bad enough, but to make it even worse, most people do swipes in predictable patterns: they go from left to right, top to bottom, typically starting in a corner, often create patterns in the shape of a letter, and rarely backtrack over the space their fingers have already traversed.

That’s what we already knew.

What the Naval Academy/UMBC security researchers did this time around was to form a baseline of exactly how easy it is for a snoop to reproduce our unlock patterns, and how much easier it is to glean a pattern vs a PIN.

In a nutshell: it is far easier for an attacker to shoulder surf a pattern than a PIN.

The large-scale study involved showing participants videos of phone users inputting PINs and unlock patterns, and then asking them to act as attackers by replicating what they’d seen.

No surprise here: they found that longer (6-digit) PINs are fairly tough to shoulder surf at first blush. Only about 10% of the “attackers” who took a single look at the video of a 6-digit PIN got it right. That went up to about one in four with multiple viewings of the same video.

Compared to that, Android patterns that used 6 nodes were a breeze for the attackers. Their attack success rate was 64% with a single viewing of the video—a success rate that shot up to 80% with multiple views.

Naval Academy Professor Adam Aviv told Wired that it’s easier for humans to detect patterns than PINs because our brains are wired that way:

Patterns are really nice in memorability, but it’s the same as asking people to recall a glyph. Patterns are definitely less secure than PINs.

The researchers accounted for multiple conditions that could affect a shoulder surfing attack, including two common touchscreen sizes; they incorporated 5 different observation angles to simulate various observer vantage points; they considered different hand positions, such as single-handed thumb input vs two-handed index finger input; and they compared varying length PINs and swipe patterns, both with and without the feedback lines.

The researchers noted that disabling Android’s “feedback lines”—those lines that visually trace the pattern in the wake of a swiping finger—cut that attack success rate down to 35% for single viewings and 52% with multiple views. That’s still pretty high, but at least it’s a bit of a bone to throw to those who really, really like their pattern unlocking.

After all, patterns are better than no protection at all. As it is, exhausted users are increasingly just rolling over and playing dead, numbed by alarm fatigue from all the security protocols, security warnings, and data getting crowbarred out of companies that can’t seem to figure out how to keep it safe.

The best approach to securing a phone is to use the longest PIN your phone will allow and the shortest lockout time you can stand.

Aviv, along with his fellow researchers, will present the paper at the Annual Computer Security Applications Conference in Puerto Rico in December.



from Naked Security http://ift.tt/2yvIQ2Y

How Apple’s Face ID works, learns, and protects

Apple has unveiled a new version of its privacy page and a paper throwing more light on how Face ID, its newest biometric authentication option, works on iPhone X (“Ten”).


The former places even more importance on security and privacy features and policies, something that Apple is becoming even more vocal about than before. It’s abundantly clear that Apple believes those things are increasingly becoming an important selling point.

Releasing the latter is a wise decision, and should make users more comfortable with Face ID and more likely to use it. After all, it will be the only biometric authentication option on that phone – the Touch ID fingerprint sensor has been removed.

More on Face ID security

The paper delineates many interesting things about Face ID.

For example: It will unlock devices only if it can “confirm attention”, i.e. the user must be looking directly at the screen. The option will be disabled by default if VoiceOver is activated, and can be disabled separately, if the user wishes or needs it.

Face ID also can’t be set up without setting up a passcode. The device will fall back on requesting the passcode if:

  • It has been turned on or restarted
  • It hasn’t been unlocked for more than 48 hours
  • The passcode hasn’t been used to unlock the device in the last six and a half days and Face ID has not unlocked the device in the last four hours
  • It has received a remote lock command
  • There have been five unsuccessful attempts to match a face
  • Power off/Emergency SOS has been initiated by pressing and holding either volume button and the side button simultaneously for two seconds. (This is also a simple way to disable Face ID quickly and surreptitiously.)

Face ID works in conjunction with two neural networks. One has been extensively trained to perform the facial matching required for it to work as intended; the other is trained to spot and defend against spoofing attempts to unlock the phone with photos or masks.

The former will not be stymied by hats, scarves, glasses, contact lenses, or sunglasses, or by different lighting conditions (including total darkness). Face ID keeps pace with natural changes (ageing, facial hair, makeup) by augmenting its stored mathematical representation of the user’s face.

Apparently, the probability that a random person will be able to unlock a user’s phone through Face ID is approximately 1 in a million (twins and siblings could have better luck). On the other hand, Face ID might not be the best option for children under the age of 13, as their distinct facial features may not have fully developed.

Apple is not collecting data

No specific information has been offered about how the anti-spoofing neural network works, but the company made sure to point out that using Face ID doesn’t mean that Apple will collect photos of users’ faces.

The photos taken during enrollment are not sent to Apple. They are used to create mathematical representations of the user’s face, and are saved only on the device.

“The neural networks may be updated over time. To avoid a user having to re-enroll to Face ID when these neural network changes are made, iPhone X will be able to automatically run stored enrollment images through the updated neural network,” the company noted.

“In addition to being encrypted and protected by the Secure Enclave, these enrollment images are cropped to your face, minimizing the amount of background information. Face images captured during normal unlock operations aren’t saved, but are instead immediately discarded once the mathematical representation is calculated for comparison to the enrolled Face ID data.”

Apple has made it so that users who want to send Face ID diagnostic data to AppleCare will have to explicitly confirm their wish to do so, and will be able to choose which data will be uploaded and which not. All that data will afterwards be deleted from the device.


from Help Net Security http://ift.tt/2wlzfex

Department of Homeland Security to Collect Social Media of Immigrants and Citizens

New rules give the DHS permission to collect "social media handles, aliases, associated identifiable information, and search results" as part of people's immigration file. The Federal Register has the details, which seem to also include US citizens who communicate with immigrants.

This is part of the general trend to scrutinize people coming into the US more, but it's hard to get too worked up about the DHS accessing publicly available information. More disturbing is the trend of occasionally asking for social media passwords at the border.


from Schneier on Security http://ift.tt/2xN1N3R

The sorry state of stock trading mobile app security revealed


Remember how mobile banking apps got raked over the coals for, among other security lapses, not checking security certificates?

Raked over the coals, as in, repeatedly, in 2013 and again in 2015.

Well, the bad money-handling apple hasn’t fallen very far from the we-don’t-need-no-certificate-validation tree. The only difference is that this time around, it’s stock-checking apps that are asleep at the wheel, using HTTPS without bothering to validate security certificates, or even using HTTP and sending your passwords and other data around in plain text.

A recap of what’s led up to the still-sorry state of mobile financial apps:

Back in the Dark Ages – that would be 2013 – we were pretty appalled when IOActive reported that 40% of iOS banking apps blindly accepted any old TLS certificate for secure HTTP (HTTPS) traffic, with no validation whatsoever.

When you make a secure connection using HTTPS, the system you’re connecting to presents a digital certificate containing its public key and identity. Anyone can create a certificate, but unless the details in it have been vouched for by a CA (Certificate Authority) it’s deemed untrustworthy.

If apps don’t bother to check if a CA has vouched for a certificate then all bets are off. Any certificate could be presented, by anybody, without setting off any alarms.

A banking app could be misdirected to a phishing site, perhaps by a bogus Wi-Fi hotspot, and you’d be none the wiser. Your mobile browser wouldn’t tell you to back out of the untrusted site and you’d be left high and dry, handing over your banking details to a crook.
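To make the failure concrete, here’s a minimal Python sketch (not the apps’ actual code) contrasting a properly configured TLS client with the “accept anything” configuration these apps are effectively using:

```python
import ssl

# A properly configured client verifies the server's certificate chain
# against trusted CAs and checks that the hostname matches.
good = ssl.create_default_context()
assert good.verify_mode == ssl.CERT_REQUIRED
assert good.check_hostname is True

# The sin described above is equivalent to doing this on purpose:
bad = ssl.create_default_context()
bad.check_hostname = False          # don't care who the cert names
bad.verify_mode = ssl.CERT_NONE     # accept ANY certificate, from anyone

# A connection made with `bad` still encrypts traffic, but to whoever
# answered, legitimate server or man-in-the-middle alike.
```

The unsettling part is how little code separates the two: one deliberate (or copy-pasted) configuration change silently removes the only check that ties the encrypted channel to the party you meant to talk to.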

Ah, those kooky 2013 banking apps! Those were the days. The painful days.

It had to get better, right? And it did, at least a little.

By December 2015, when IOActive redid the study, it found that the initial 40% of iOS banking apps that weren’t validating certificates had shrunk to “only” 12.5%.

So yes, it got better, but it still wasn’t great: those iOS banking apps were still committing a laundry list of security sins that left many of them vulnerable to things like JavaScript injections, as well as leaking user activity and the back and forth interactions between client and server – all of which should be kept locked away from prying eyes.

It’s not just financial apps that get HTTPS wrong though.

Other apps that fumble HTTPS have included Pinterest’s iOS app and Microsoft’s iOS Yammer client, both of which failed to give warnings about fake certificates when Dutch security company Securify checked them out in April 2015.

Anyway, fast forward to the current time, and IOActive has taken yet another look at mobile apps that handle our money. This time, it looked at stock-checking apps that use HTTPS but that, déjà vu, don’t validate the SSL certificate.

…and/or that send passwords in clear text… and/or that expose trading and account information… and/or send sensitive data to log files… and/or fail to encrypt data.

In fact, IOActive’s Alejandro Hernández says that the security of mobile trading apps – he looked at 21 of the most popular Android and iOS apps – is far worse than the banking apps the company’s looked at in the past:

The results proved to be much worse than those for personal banking apps in 2013 and 2015. Cybersecurity has not been on the radar of the [financial technology] space in charge of developing trading apps. Security researchers have disregarded these apps as well, probably because of a lack of understanding of money markets.

The new flavors of appalling that arose from testing 14 security controls in the trading apps included these findings:

  • 62% of Android and iOS apps failed to validate SSL certificates.
  • 62% of Android and iOS apps left sensitive data in the logging console.
  • 67% of Android and iOS apps failed to securely store data.
  • 62% of Android apps contained hardcoded secrets.
  • 95% of Android apps didn’t detect if they were running on a rooted device.
  • 95% of iOS apps didn’t support privacy mode.

There’s another blast from the past in this recent research too. Most of the trading apps don’t have two-factor authentication (2FA), just like the banking apps in the 2013 and 2015 analyses.

When we reported on the banking apps in 2013, Naked Security’s Paul Ducklin pointed out that all the cool kids offer 2FA: Facebook, Twitter, Google et al.

The extra security provided by 2FA is obvious: crooks who steal or guess your password are out of luck unless they also steal your mobile phone, without which they won’t receive the additional codes they need to log in each time.
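The codes those apps could be generating aren’t exotic, either. The standard scheme (TOTP, RFC 6238) fits in a dozen lines; this sketch uses only the Python standard library and is illustrative rather than production-grade:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = int(timestamp if timestamp is not None else time.time()) // step
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59s
print(totp(b"12345678901234567890", timestamp=59))  # -> 287082
```

The server and the phone app share the secret and each compute the code independently, so a stolen password alone is useless: the crook also needs whatever device holds the secret.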

Hernández has disclosed his findings responsibly, he says, reporting them to 13 of the brokerage firms whose trading apps harboured the higher-risk vulnerabilities. Only two responded.

So how can we get mobile apps to improve without people like Hernández having to pop them open, gasp in horror and write lengthy reports first? He has a suggestion:

…there are rating organizations that score online brokers on a scale of 1 to 5 stars. I glimpsed at two recent reports and didn’t find anything related to security or privacy in their reviews. Nowadays, with the frequent cyberattacks in the financial industry, I think these organizations should give accolades or at least mention the security mechanisms the evaluated trading platforms implement in their reviews.

For now, improvement rests in the hands of the brokerage firms and app developers who need to up their games.

You can mitigate some of the problems IOActive uncovered by using a VPN if you’re trading from coffee shops, airports or anywhere else with public Wi-Fi. Most of the security issues mentioned here are invisible though, with the exception of 2FA. If it isn’t a feature of a trading app you want to use you can send a message by walking away.



from Naked Security http://ift.tt/2xAlxIk

Laying the foundation for a proactive SOC

Most companies are trying to shift their Security Operations Center (SOC) from a reactive to a proactive posture.

To do that, the analysts’ reaction to security events must become swift, and investigation of security alerts and incidents must become more efficient. Once high effectiveness is achieved, the analysts can concentrate even more on hunting and detecting threats within the network before they become a problem.

Based on the interest our recent article on getting a start on cyber threat hunting has garnered, we’ve decided to do a deeper dive into Sqrrl Enterprise, and the concrete value it can bring to alert and incident investigators.

A force multiplier

A successful information security investigation depends on the evidence that can be gathered, so robust and diverse evidence sources (logs, SIEMs, etc.) are a must. But they won’t be of much help if the evidence cannot be easily searched and reviewed.


Sqrrl’s Security Behavior Graph makes hunting and investigations simpler and more effective, a “force multiplier”

Sqrrl’s visual Security Behavior Graph, which is based on a linked data model that displays user and entity activity on a network and the relationships between them, is what makes this solution an effective force multiplier:

1. It allows analysts to address 3 or 4 events in a single investigation, and provides unmatched context to quickly and clearly determine whether an alert is a false positive or not. This simplifies their work and saves their time, as well as that of the CSIRT who would otherwise waste it on investigating alerts that should not have been escalated.

To obtain sufficient clarity, analysts must be able to investigate an alert and determine both the impact that a potential incident could have and the confidence with which it was generated. This includes assessing the state of their IT infrastructure and gathering additional data about endpoints, applications, and network traffic. Without the right tools, this process is extremely complicated and time consuming.

2. It eliminates the time-consuming process of manually fetching, normalizing, and correlating data, so that analysts can answer questions such as: “What other assets and what other activities were involved in the attack?” This dramatically improves alert resolution times.

3. Finally, it gives analysts a full picture of the attack – the tactics, techniques and procedures used – up to 20 times faster.

Information is just a few clicks away

Sqrrl is designed to simplify various types of hunts for both experienced and inexperienced analysts, and offers the same help to analysts tasked with incident response.

For example: You discovered that a host on your network has communicated with an IP address associated with a recent malware campaign, and you want to determine the nature of the communication. Unfortunately, it’s encrypted. So what other data points do you have available to pivot to from the IP address you have?


Custom Risk Trigger adds risk to an entity (IP address) that matches an intel list

More experienced analysts could offer a few answers to that question, but those who are less familiar might not know them all.


Starting with just the IP address in a static view, we can see all connections and associated entity relationships


Then we can go to the explorer view and expand detection on the IP. A malicious domain is highlighted to showcase the intel hit.

With Sqrrl, the answers are right there – the platform provides all the pivots that are available for a specific input. Not only that: all the pivots have a sense of direction, which adds context to them and ultimately helps analysts answer questions that will lead them towards meaningful conclusions.

Other questions Sqrrl can answer quickly include whether the data being analyzed shows up in any of the other data sources available to the analyst, or whether a suspicious-looking domain name resolved by a friendly host communicated with the same IP address that is now associated with it, and so on. No more submitting a string of queries to get to the information you need: a few clicks will suffice.
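The linked data model behind this can be pictured as a labelled graph where every edge keeps its direction. As a rough illustration (all entity names here are invented, and this is not Sqrrl’s actual data model or API), a one-hop “pivot” is just a neighbourhood lookup:

```python
from collections import defaultdict

# Toy linked-data model: nodes are entities (hosts, IPs, domains, users),
# edges are labelled relationships observed in log data.
edges = [
    ("host:workstation-7", "connected_to", "ip:203.0.113.9"),
    ("domain:bad-updates.example", "resolves_to", "ip:203.0.113.9"),
    ("user:asmith", "logged_on", "host:workstation-7"),
]

graph = defaultdict(list)
for src, rel, dst in edges:
    graph[src].append((rel, dst))
    # Record the reverse edge too, so pivots keep a sense of direction.
    graph[dst].append((f"inverse_{rel}", src))

def pivots(entity):
    """Every entity one hop away, with its directed relationship label."""
    return graph[entity]

# Starting from the suspicious IP, an analyst sees both the internal host
# that talked to it and the domain that resolved to it, in one step.
print(pivots("ip:203.0.113.9"))
```

Each answer is itself a new starting point, which is why expanding an investigation becomes a series of clicks rather than a series of hand-written queries.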

A similarly short process is all it takes to get at the “real story” behind each piece of evidence.

For example: You investigated a suspicious domain and found no sign of malicious activity from the associated IP address. But did the domain at that time have the same IP address as it has now? Getting this information from a traditional SIEM requires making large queries to multiple data sources. Sqrrl can answer that question after a few clicks, and you can continue the investigation by checking out the other IP addresses that were associated with it when one of your hosts contacted it.

Similar investigations can be made to discover whether the real owner of an account is behind the username authenticating to a system, or whether malicious attachments with different names are actually the same file. With Sqrrl, expanding the scope of investigations takes a lot less effort and time than before, which also means it is more likely to happen.

A similarly easy-to-perform expansion can be made to see whether a suspicious or malicious indicator can be found on other devices. Such a search can, for example, uncover infected hosts that simply haven’t yet executed the malicious file.


Here I just selected skype.exe, which doesn’t have to be malicious, entered explorer mode and expanded out connection resolutions to the hosts. I removed one host that did not resolve to an IP address.

Other variations of these attack scope expansions can include searching through a broader array of data sources, a larger time span, or a larger array of the attack surface area (e.g. more hosts). Sqrrl effectively facilitates the collection of these puzzle pieces strewn across the company’s networks and assets.

Improved defense

How often your analysts will be able to turn to threat hunting instead of always being two steps behind the attackers depends a lot on the tools you give them.

With the right tools, they’ll be able to breeze through the alerts and evidence and make critical assessments and decisions much more quickly. They will be able to cover much more ground than before, while expending less effort and time.

Such an increase in efficiency can only be good for both the organization and the analysts themselves: they will be able to dedicate more time to improve their threat hunting knowledge and capabilities, and use them to become even better defenders.


from Help Net Security http://ift.tt/2xDpGJk

Is this the year SIEM goes over the cliff?

While this may not be the year that Security Information and Event Management (SIEM) solutions fall off of the cliff of relevancy into obsolete-software land, they are slowly moving closer to the edge.

Initially, SIEM solutions sought to solve the collection, monitoring, analysis, and identification of threats in the cybersecurity environment. Bogged down by time-intensive tuning and the large data infrastructure needed to house massive amounts of information, SIEM’s downward spiral may yet be stayed by new security analytics enhancements that boost network visibility and efficiency, at least for the time being.

Origin story

In the early 2000s, cybersecurity at most large enterprises consisted of network-based firewalls and antivirus software on local desktops and servers. Then came intrusion detection systems, driven in the large part by widespread industry adoption. While intrusion detection systems did help identify suspicious traffic, they also generated vast quantities of alerts, requiring countless hours of fine-tuning sensors to weed out the signals from the noise.

To address this issue, SIEM solutions were designed and deployed in large organizations with a simple goal: take the many security alerts, distill them into actionable events and add vulnerability management information to provide context. Through the use of aggregation (grouping similar alerts occurring simultaneously into a single event) and correlation (grouping events with similar characteristics into a single event) capabilities, SIEM initially reduced alert clutter and saved analysts’ precious time. The vulnerability information provided enough context to determine if the device in question was indeed susceptible to the potential attack.
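The aggregation and correlation ideas described above are simple enough to sketch in a few lines. This is a toy illustration of the two techniques (the alert fields and thresholds are invented for the example, not taken from any particular SIEM product):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Sample alerts as an intrusion detection system might emit them.
alerts = [
    {"sig": "port-scan", "src": "10.0.0.5", "ts": datetime(2017, 9, 1, 9, 0, 0)},
    {"sig": "port-scan", "src": "10.0.0.5", "ts": datetime(2017, 9, 1, 9, 0, 20)},
    {"sig": "brute-force", "src": "10.0.0.5", "ts": datetime(2017, 9, 1, 9, 1, 0)},
]

def aggregate(alerts, window=timedelta(minutes=1)):
    """Aggregation: merge identical alerts (same signature and source)
    that fire within a short time window into one event with a count."""
    merged = []
    for a in sorted(alerts, key=lambda a: a["ts"]):
        if (merged and merged[-1]["sig"] == a["sig"]
                and merged[-1]["src"] == a["src"]
                and a["ts"] - merged[-1]["ts"] <= window):
            merged[-1]["count"] += 1
        else:
            merged.append({**a, "count": 1})
    return merged

def correlate(events):
    """Correlation: group events sharing a characteristic (here, the
    source IP) into a single logical incident per attacker."""
    incidents = defaultdict(list)
    for e in events:
        incidents[e["src"]].append(e["sig"])
    return dict(incidents)

events = aggregate(alerts)   # 3 raw alerts collapse into 2 events
print(correlate(events))     # one incident, tying both behaviors to 10.0.0.5
```

Even this toy version shows the payoff: three raw alerts become one incident with a coherent story (a host that scanned ports and then tried brute-forcing), which is the clutter reduction that made early SIEM worthwhile.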

Drowning in data

A SIEM solution ingests data from multiple sources, resulting in cumbersome contracts for vendors, users and enterprises, plus the need for a plethora of servers to store all the data and keep it accessible and available. Ideally, the more sources you can point at the SIEM, the more efficient and effective your security team becomes. Unfortunately, as evidenced by Netwrix’s 2016 SIEM efficiency survey, 81 percent of respondents believe that SIEM reports contain too much data and too little actionable insight.

For large enterprises that generate terabytes of data every month, a typical SIEM solution can fracture analysts’ time, as organizations find themselves deploying log management solutions to offload some of the data collection, processing and analysis from the SIEM for specific functions. This counterproductive move only adds to an organization’s technology debt and solution fatigue instead of alleviating workload.

Making do

With massive existing investments in SIEM and the large amount of data already stored, it is difficult to simply rid your enterprise of these solutions. But how do you continue to extract value? The answer: next-generation SIEM.

According to a recent Forrester report on security analytics platforms, the burgeoning security analytics market can provide a way to bring SIEM back from the cliff while extracting value. This modern field keeps up with compliance mandates around log management and reporting, in addition to monitoring and alerting capabilities. While potentially compounding technology debt, next-gen SIEM, as it evolves into security analytics, adds three features:

1. Network Analysis and Visibility – This comprehensive category boosts network analysis and visibility by using network discovery, flow data, metadata and packet capture analysis, as well as forensic tools.

2. Behavior Analytics – To suss out malicious users and garner a better understanding of user behavior, security user behavior analytics is a newer capability differentiating cutting-edge solutions in the market.

3. Big Data Infrastructure – To handle the massive volume of events and process multiple data sources, security analytics solutions are turning to platforms that can handle big data infrastructure at scale.

In short

Security analytics enhancements don’t solve traditional SIEM issues, but they offer a great step forward. Until a single platform can perform all necessary functions beyond detection with speed, value adds will be incremental. While some enterprises have taken to creating their own cyber data lakes to address SIEM shortcomings and add needed analytics, managing and extracting value from such a setup requires an army.

Instead, when troubleshooting your SIEM, look for tighter network integrations between cloud and endpoint solutions with a mind for prevention and mitigation. While this might not be the year SIEM goes over the cliff, it’s getting close. Meanwhile, security analytics are offering a saving grace.


from Help Net Security http://ift.tt/2wYKicV