Thursday, August 31, 2017

What’s the use of a privacy policy?

In 2012 it was reported that “16% of Internet users claim to always read privacy policies of the sites and online services with which they share their private information”.

I would challenge that figure, since I would have anticipated a decimal point one place to the left, but let us assume it really is 16%. For those of you in that group, this article is probably for you.

Buying any technology that is considered ‘smart’ is a painful exercise. It is not a simple task of reviewing the features on offer: you must also read the privacy policy very carefully and scour for any research that reveals privacy or security concerns, and that is just the first week of research.

Once this “personal due diligence” is concluded, you would anticipate being able to sit back and enjoy the service such technology provides. However, a recent story that went somewhat unnoticed amongst the furore of the daily diet of infosec stories was the announcement from one technology vendor that it will be updating its privacy policy.

So what?

Well, recent reports that Sonos will be updating its privacy policy are no big deal, right? Well, per a company spokesperson: “The customer can choose to acknowledge the policy, or can accept that over time their product may cease to function.” For those of you in the 16%, you understand what that means, right? According to the company’s FAQ you can of course request that your personal data be deleted, but again your device will stop working.

We live in a world in which “personal data is the new oil” is the first slide of any presentation related to privacy, so chances are most technology companies understand this. Consequently, the evolution of new business models means that we will begin to see more examples where data is processed beyond its initially specified purpose (those of you in the UK who are familiar with the Data Protection Act may like that phrase!).

If you or I think this is unreasonable then tough luck – write off the investment, get a time machine, go into Our Price and buy your 7″ single to play on your record player.

Forcing loyal customers into the acceptance of new business models is particularly unfair, and whilst normally the advice is to do your homework, in these cases that is not really an option. I’m sorry, no upbeat ending this time: I value my personal data, but it seems so does the bottom line of the vendors I welcome into my home.


from Help Net Security http://ift.tt/2wWk7Z3

New infosec products of the week​: September 1, 2017

Palo Alto Networks announces Next-Generation Security Platform for VMware Cloud on AWS

Palo Alto Networks announced its Next-Generation Security Platform is available to customers of VMware Cloud on AWS. It allows customers to protect their on-premise, private and public cloud presence with next-generation security features that deliver visibility, control and threat prevention at the application level. This enables customers to securely migrate applications and data from their software-defined data center into VMware Cloud on AWS.

New WatchGuard Wi-Fi AP delivers performance and security for high-density wireless networks

WatchGuard Technologies announced the AP420, a new 802.11ac Wave 2 AP that provides the power and speed needed to support throughput-intensive and latency-sensitive applications like VoIP, video, music, and large data file transfers over Wi-Fi. It also includes a 4×4 Multi-User MIMO (MU-MIMO) dual radio that offers high client-density support to help eliminate Wi-Fi connection delays, and a third radio for dedicated Wireless Intrusion Prevention System (WIPS) and RF optimization scanning.

Bitdefender delivers security for software-defined datacenters

Bitdefender’s security platform, GravityZone, is designed to simplify security operations and minimize impact on infrastructure resources, while delivering adaptive layered next-generation defenses. The platform enables centralized security manageability for physical, virtual on-premise and cloud machines, maximizing virtualization density to reduce infrastructure costs and minimizing latency to improve user experience.

Great Bay Software delivers IoT security enhancements

Great Bay Software introduced Beacon Product Suite 5.2. Beacon’s artificial intelligence expert system-based engine collects and correlates information from dozens of data sources. As a result, Beacon discovers every device – IoT, smart and unmanaged – within seconds of connecting to the network. It provides complete visibility and maintains a rich historical device database in its Warehouse of Context.

Tufin Orchestration Suite R17-2 automates critical firewall tasks

Tufin Orchestration Suite R17-2 includes automation for firewall administration tasks. The release also provides new features advancing network security policy management of Cisco Firepower, VMware NSX, Microsoft Azure, Check Point R80.10, and Palo Alto Networks Panorama solutions.

CloudBees launches CloudBees DevOptics

CloudBees DevOptics aggregates live data from software pipelines to help derive essential metrics and insights into a holistic view of application delivery. It creates context between teams, applications and tools to identify ROI, improvements and increase collaboration.


from Help Net Security http://ift.tt/2wn0pnt

Whitepaper: Understanding pulse wave DDoS attacks

Pulse wave DDoS is a new attack tactic, designed to double the botnet’s output and exploit soft spots in “appliance first cloud second” hybrid mitigation solutions.

Consisting of a series of short-lived bursts occurring in clockwork-like succession, pulse wave assaults have accounted for some of the most ferocious DDoS attacks we have ever mitigated.

Reading this whitepaper will help you:

  • Understand the nature of pulse wave DDoS attacks
  • See how they are used to pin down multiple targets
  • Discover the soft spots these assaults can exploit
  • Learn about other attacks that occur in short bursts.


from Help Net Security http://ift.tt/2wrvFA7

Pacemaker gets firmware update – go and see your doctor


When a cardiac pacemaker or defibrillator is implanted into a patient, thin, flexible wires called leads are attached to deliver electric shock from the pulse generator directly to the heart.

Those leads sometimes fail. Sometimes, they get infected. Other times, they’re recalled. Removal involves surgery – a complex, delicate procedure that risks damage to the heart tissue.

So what happens when the manufacturer of an internet-connected, radio frequency (RF)-enabled pacemaker finally, begrudgingly, stops fighting and litigating over potentially life-threatening attacks and issues a firmware fix for its pacemakers?

Fortunately, it’s not open heart surgery, though it will entail an in-person trip to a healthcare provider’s office.

Abbott (formerly St. Jude Medical) fixed the software side of the security vulnerabilities in January. Now, on Monday, it got to the vulnerabilities in the devices themselves.

In a Dear Doctor letter, Abbott described the firmware update as a three-minute process, during which the pacemaker will operate in backup mode, pacing at 67 beats per minute.

Essential, life-sustaining features will remain available. At the completion of the update, the device will return to its pre-update settings.

Abbott said that with any firmware update, there’s always a (low) risk of an update glitch. Based on the company’s previous firmware update experience, installing the updated firmware could potentially result in the following malfunctions, with the tiny rates of occurrence that St. Jude Medical has previously observed:

  • 0.161% chance of reloading the previous firmware version due to an incomplete update
  • 0.023% chance of loss of currently programmed device settings
  • 0% chance of loss of diagnostic data (none has been reported in other firmware upgrades)
  • 0.003% chance of complete loss of device functionality

That last one may seem like a vanishingly small potential, but it’s a dire one. Pacemaker failure has two outcomes, depending on how well the patient’s heart works: you get sick, or you die.

But fortunately, that tiny chance of pacemaker failure will likely be smaller still, given that both Abbott and the US Food and Drug Administration (FDA) say they’re not recommending prophylactic removal and replacement of affected devices.

Here’s the list of St. Jude’s/Abbott’s affected implantable cardiac pacemakers, including cardiac resynchronization therapy pacemaker (CRT-P) devices:

  • Accent
  • Anthem
  • Accent MRI
  • Accent ST
  • Assurity
  • Allure

We’re talking about a total of 465,000 implanted devices that are affected by the firmware flaws, which leave the devices vulnerable to tampering that could cause them to pace at potentially dangerous rates or fail by rapidly draining their batteries.
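To put those rates in context, a quick back-of-the-envelope calculation (a sketch using the 465,000 affected US devices; the per-malfunction rates are those quoted above) gives the expected number of devices hit by each outcome:

```python
# Expected devices affected per malfunction type, assuming all 465,000
# affected US pacemakers receive the firmware update and the previously
# observed malfunction rates hold.
DEVICES = 465_000

rates = {
    "reload of previous firmware": 0.00161,     # 0.161%
    "loss of programmed settings": 0.00023,     # 0.023%
    "complete loss of functionality": 0.00003,  # 0.003%
}

for outcome, rate in rates.items():
    print(f"{outcome}: ~{DEVICES * rate:.0f} devices")
```

Even the smallest rate, 0.003%, corresponds to roughly 14 devices at risk of complete failure – small in relative terms, but not zero.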

In January, St. Jude had announced security updates for its Merlin remote monitoring system, which is used with implantable pacemakers and defibrillator devices.

The fixes were designed to reduce what St. Jude claimed to be extremely low cyber-security risks.

At the time, the pacemaker company said it was unaware of any security incidents related to, or any attacks explicitly targeting, its devices. The same was true as of this week: there have been no known security incidents.

Well, that’s a blessing. Still, that January software update addressed some, but not all, known cyber-security problems in the heart devices. The holes left in place by the incomplete fix were those in the firmware. They were deemed to be pretty serious: Matthew Green, an assistant professor at Johns Hopkins University, described the pacemaker vulnerability scenario as the fuel of nightmares. For one, a weak authentication protocol left the devices open to commands sent via RF, from a distance, leaving no trace, by anybody who knows the protocol (including home devices).

After installing the update that Abbott made available on Tuesday, any device attempting to communicate with the implanted pacemaker would have to provide authorization – received from the Merlin Programmer and Merlin@home Transmitter – to do so.

Pacemakers manufactured from August 28, 2017 onwards will ship with this update pre-loaded and won’t need it.

Abbott and the FDA are recommending that doctors discuss the risks and benefits of the vulnerabilities and the firmware update with their patients at their next regularly scheduled visit. They’re saying that it’s important to consider factors such as each patient’s level of pacemaker dependence, the age of the device, and patient preference.

Their suggestions:

  • For pacing-dependent patients, consider performing the firmware update in a facility where temporary pacing and a replacement pacemaker generator can be readily provided.
  • Print or digitally store the programmed device settings and the diagnostic data in case of loss during the update.
  • After the update, confirm that the device maintains its functionality, is not in backup mode, and that the programmed parameters have not changed.



from Naked Security http://ift.tt/2grDAcS

Beware scammers phishing for disaster charity – or anything else


It has long been obvious – or should be – that phishing criminals are like looters: they are good at spotting crimes of opportunity.

And there has been considerable high-profile opportunity lately, in the form of a natural disaster and a big-money lottery win. The seemingly endless rains (maybe not 40 days and 40 nights, but 40-plus inches) in Texas from Hurricane Harvey have, predictably, opened the hearts and wallets of people throughout the country and beyond, hoping to help offset some of the damage and suffering from catastrophic flooding.

So that, also predictably, has drawn the cyber underbelly – scammers – looking to exploit that generosity. US CERT (United States Computer Emergency Readiness Team) issued a “Potential Hurricane Harvey Phishing Scams” notice this week, warning people to

… remain vigilant for malicious cyber activity seeking to capitalize on interest in Hurricane Harvey … even if it appears to originate from a trusted source. Fraudulent emails will often contain links or attachments that direct users to phishing or malware-infected websites.

Indeed, the risk is much more than “potential”. The scams are up, spreading like kudzu. Fortune reported “several suspicious online profiles and personas that, although their legitimacy couldn’t be determined, raised several red flags: a small number of followers, unverified accounts, no apparent links to accredited charities, and no means to track where proceeds go”.

Security researcher Perry Carpenter warned about Facebook pages supposedly dedicated to victim relief that contain links to scam websites; tweets with links that claim to lead to charitable websites that are actually spam links or lead to a malware infection; and phishing emails asking for donations to a “#HurricaneHarvey Relief Fund”.

One might think such scams would be obvious. But they sprout overnight online because, says Carpenter:

 … they still work. With a circumstance like Hurricane Harvey, so many people truly want to help others in need. Scammers use that vulnerability and empathy to prey upon the human spirit.

But it is not just disasters that bring scammers out of the woodwork. Mavis Wanczyk’s good fortune has done it as well. The 53-year-old Chicopee, Mass. resident and (former) hospital worker recently won one of the biggest Powerball jackpots in history, at $758m.

And now there are dozens of “Mavises” on social media, offering people some of that cash in exchange for some of their personal information – you know, “she” would need to know your bank info so she can deposit the money in your account.

These are apparently more credible than the emails from the Nigerian princess who addresses you as “Dear One”, and then offers a few million bucks if you send her some info, because the Boston Globe reported this week that police in Chicopee had issued a warning on Facebook:

PLEASE do not fall for these scams. DO NOT give out any personal information to these accounts. Do not fall victim to a scammer by releasing ANY of your information.

The Globe reported that a quick social media scan produced more than a dozen Facebook accounts using Wanczyk’s name and photo – one with 3,000 likes and purported messages from her – plus another 13 Twitter accounts, “using photos of Wanczyk, or the giant lottery check she received, claiming to be her”.

None of this is new, of course, nor is the fairly foolproof advice on how to avoid becoming a victim, the most important piece of which is: NEVER click on a link in an email or social media post unless you are absolutely sure it is from someone you know and trust. Do not click on “click to donate” unless you’re sure it’s a reputable site.

It is laudable, and possible, to donate safely to worthy causes: the way to do that is to go to the website of a credible charity.

Along with that, the recent US CERT notice has a list of recommendations with links (they’re good ones – we checked) to other helpful information, some of it specifically aimed at hurricane relief.

“You don’t have to sacrifice your humanity and sympathy for the sake of security,” Carpenter said. “Act. Give. Help. But do so in a wise and informed manner.”



from Naked Security http://ift.tt/2vv5WK6

Stealthy backdoor used to spy on diplomats across Europe

A new, sophisticated backdoor Trojan has been used to spy on targets in embassies and consulates across Southeastern Europe and former Soviet Union republics.

ESET researchers have analyzed and documented the Trojan, which they dubbed Gazer, and are highly confident that it is being used by the Turla cyberespionage group.

The Gazer backdoor and ties to Turla

The researchers have analyzed different Gazer samples and have identified four versions of the malware. Some of the samples were signed with legitimate certificates.

Gazer shares several similarities with other malware (Carbon, Kazuar) used by the Turla APT: it can receive encrypted tasks from a C&C server, uses an encrypted container to store its components and configuration, and logs its actions into encrypted logfiles.

The malware seems to have been in use since 2016, leveraged in targeted attacks against embassies and consulates (Turla’s usual targets), but this is the first time it has been documented.

Gazer flew under the security industry’s radar for some time, partly because its authors used custom encryption (their own library for 3DES and RSA).

“As usual, the Turla APT group makes an extra effort to avoid detection by wiping files securely, changing the strings and randomizing what could be simple markers through the different backdoor versions. In the most recent version we have found, Gazer authors modified most of the strings and inserted ‘video-game-related’ sentences throughout the code,” they noted.

“The witnessed techniques, tactics and procedures (TTPs) are in-line with what we usually see in Turla’s operation: a first stage backdoor, such as Skipper, likely delivered through spearphishing followed by the appearance on the compromised system of a second stage backdoor, Gazer in this case.”

They have provided technical details, indicators of compromise and Yara rules that can be used to flag known variants of the threat.


from Help Net Security http://ift.tt/2wV5bdC

Machine learning for malware: what could possibly go wrong?


Security vendors – Sophos included – continue touting the benefits of machine learning-based malware analysis. But, as we’ve written in recent weeks, it must be managed properly to be effective. The technology can be abused by bad actors and corrupted by poor data entry.

Sophos data scientists spoke about the challenges and remedies at length during Black Hat USA 2017 and BSidesLV, and have continued to do so. The latest example is an article by data scientist Hillary Sanders about the importance of proper labeling.

Sometimes, says Sanders, the labels companies inject into their models are wrong.

Dirty labels, bad results

As she put it, supervised machine learning works like this:

  • Researchers give a model (a function) some data (like some HTML files) and a bunch of associated desired output labels (like 0 and 1 to denote benign and malicious).
  • The model looks at the HTML files, looks at the available labels 0 and 1, and then tries to adjust itself to fit the data so that it can correctly guess output labels (0, 1) by only looking at input data (HTML files).
  • Researchers define the ground truth for the model by telling it that “this is the perfectly accurate state of the world, now learn from it so you can accurately guess labels from new data”.
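The three steps above can be sketched as a toy supervised learner. Everything here – the “suspicious token count” feature, the sample files, and the mean-threshold rule – is invented for illustration and bears no relation to any vendor’s actual model:

```python
# Toy supervised learning: the model sees input data (tiny "HTML" strings)
# plus labels (0 = benign, 1 = malicious) and adjusts itself so it can
# predict labels from inputs alone.
SUSPICIOUS = ("eval(", "unescape(", "document.write(")

def features(html: str) -> int:
    """Feature extraction: count suspicious tokens in the file."""
    return sum(html.count(tok) for tok in SUSPICIOUS)

def train(samples: list[tuple[str, int]]) -> float:
    """Learn a threshold midway between mean benign and malicious scores."""
    benign = [features(h) for h, label in samples if label == 0]
    malicious = [features(h) for h, label in samples if label == 1]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2

def predict(threshold: float, html: str) -> int:
    return 1 if features(html) > threshold else 0

training_data = [
    ("<p>hello</p>", 0),
    ("<h1>news</h1>", 0),
    ("<script>eval(unescape('%61'))</script>", 1),
    ("<script>document.write(eval(x))</script>", 1),
]
threshold = train(training_data)
print(predict(threshold, "<script>eval(eval(unescape(p)))</script>"))  # 1
print(predict(threshold, "<p>benign page</p>"))                        # 0
```

The “ground truth” problem Sanders describes is visible even here: if a malicious training file were mislabeled 0, the learned threshold would shift and the model would inherit the error.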

The problem, she says, is when researchers give their models labels that aren’t correct:

Perhaps it’s a new type of malware that our systems have never seen before and hasn’t been flagged properly in our training data. Perhaps it’s a file that the entire security community has cumulatively mislabeled through a snowball effect of copying each other’s classifications. The concern is that our model will fit to this slightly mislabeled data and we’ll end up with a model that predicts incorrect labels.

To top it off, she adds, researchers won’t be able to estimate their errors properly because they’ll be evaluating their model with incorrect labels. The validity of this concern is dependent on a couple of factors:

  • The amount of incorrect labels in a dataset
  • The complexity of the model
  • If incorrect labels are randomly distributed across the data or highly clustered

In the article, Sanders uses plot charts to show examples of when things can go wrong. Those charts are in the “problem with labels” section.

Getting it right

After guiding readers through the examples of what can go wrong, Sanders outlines what her team does to get it right. To minimize the amount and effects of bad labels in their data, the team…

  • Only uses malware samples that have been verified as inherently malicious through sandbox analysis and confirmed by multiple vendors.
  • Tries not to overtrain, and thus overfit, their models. “The goal is to be able to detect never-before-seen malware samples, by looking at similarities between new files and old files, rather than just mimic existing lists of known malware,” she says.
  • Attempts to improve their labels by analyzing false positives and false negatives found during model testing. In other words, she explains, “we take a look at the files that we think our model misclassified (like the red circled file in the plot below), and make sure it actually misclassified them”.

She adds:

What’s really cool is that very often – our labels were wrong, and the model was right. So our models can actually act as a data-cleaning tool.
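That review loop – compare the model’s predictions with the recorded labels and hand any disagreements to a human – can be sketched as a simple filter. The stand-in model and dataset here are invented for illustration:

```python
# Flag files where the model's prediction disagrees with the recorded
# label, so a reviewer can decide whether the model or the label is wrong.
def disagreements(model, dataset):
    """Return (sample, recorded_label, model_prediction) for mismatches."""
    return [(x, label, model(x)) for x, label in dataset if model(x) != label]

# A stand-in "model" that calls anything containing "eval(" malicious.
model = lambda html: 1 if "eval(" in html else 0

dataset = [
    ("<p>hi</p>", 0),
    ("<script>eval(x)</script>", 1),
    ("<script>eval(payload)</script>", 0),  # suspicious: label may be dirty
]

for sample, label, pred in disagreements(model, dataset):
    print(f"review: label={label} model={pred} file={sample!r}")
```

In this toy run only the third file is flagged – exactly the “model right, label wrong” case Sanders describes.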



from Naked Security http://ift.tt/2vuUyOs

Is your email in the latest cache of 711 million pwnd addresses?


It’s never good news to receive an alert from the Have I Been Pwned? (HIBP) project but it’s better to know than not.

Founded by Troy Hunt after the historically embarrassing Adobe breach of 2013, HIBP is a database of breached, scraped and otherwise stolen email accounts that lets anyone check whether theirs is known to be circulating among cybercriminals.

Vast numbers are, and to this total we can now add another 711m, recently discovered by a researcher called Benkow in an unsecured state inside text files on a Netherlands-based server that has been using them to fuel the “Onliner” spambot.

This, HIBP informs me, includes an email address registered to a domain I’ve used for years, the third time the site has spotted it inside a breach cache in four years.

Should I, or anyone else receiving the same email alert from HIBP about this spam list, be worried?

Hunt sums up the cache’s mountainous size:

Just for a sense of scale, that’s almost one address for every single man, woman and child in all of Europe.

It’s true the 711m haul is the largest yet reported by the site, but some of these will have been mentioned in previous breaches, in my case Adobe (152m) and Dropbox in 2012 (68m). Aggregated from different sources, the numbers aren’t cumulative.

HIBP also describes my email address as having been “pwned” in the latest dump although, strictly speaking, it’s the sites that allowed a breach to happen that deserve to be chastised – my failing was to entrust the address to companies that failed to protect it.

More concerning is what these addresses are being used for. Much of the new cache appears to be email addresses, which means that anyone whose address appears within it will be targeted by spam including, in the case of Onliner, the Ursnif banking malware.

Because my email address appeared in previous breaches, that was already the case, so arguably I’m no worse off than before. I’m in good company at least – Hunt spotted an email address used by him mentioned twice in the cache.

Of larger concern might be the group whose passwords are included, among them those apparently extracted from the unsalted SHA-1 hashes that were part of the 2012 LinkedIn breach, whose troubling scale didn’t come to light until 2016.
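A short sketch shows why unsalted hashes are the problem: without a per-user salt, the same password always yields the same SHA-1 digest, so one precomputed table instantly cracks every account that used a common password (the passwords here are invented examples):

```python
# Unsalted hashing is deterministic: identical passwords produce identical
# digests, so a single lookup table works against an entire breach dump.
import hashlib

def sha1_hex(password: str) -> str:
    return hashlib.sha1(password.encode()).hexdigest()

# Attacker's precomputed table of common passwords.
rainbow = {sha1_hex(p): p for p in ["123456", "password", "linkedin"]}

# Leaked, unsalted hashes: any match is cracked instantly.
leaked = [sha1_hex("password"), sha1_hex("hunter2")]
for digest in leaked:
    print(digest, "->", rainbow.get(digest, "<not in table>"))
```

Salting – mixing a unique random value into each hash – defeats exactly this table reuse, which is why the unsalted LinkedIn hashes cracked so quickly.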

Other files contained tens of thousands of email server credentials, including SMTP server and port configuration. Explains Hunt:

Thousands of valid SMTP accounts give the spammer a nice range of mail servers to send their messages from.

Separately, Benkow, the researcher who discovered the cache, estimates a total of 80m credentials of different kinds.

Hunt and Benkow are now trying to have the cache data removed from the site it was found on, which is still up and accessible to anyone who knows where to look. Ironically, whoever was farming this data didn’t devote much effort to keeping it to themselves.

Anyone who thinks they might be affected can check HIBP manually for their email addresses or account name. Anyone anxious about their email server credentials should change the password at the very least before going for a long, calming lie down.

Sometimes it’s better to know what’s really going on even if that knowledge is depressing or troubling. In the case of this cache, it’s that addresses, credentials and personal data have long since become a criminal commodity. This can’t be stopped or reversed, merely contained.

But at least email addresses and credentials can be changed, more than can be said for users whose names, addresses, dates of birth and social security numbers are breached. This cache of breached data looks bad – but it could be so much worse.


from Naked Security http://ift.tt/2eHlbFf

Journalists Generally Do Not Use Secure Communication

This should come as no surprise:

Alas, our findings suggest that secure communications haven't yet attracted mass adoption among journalists. We looked at 2,515 Washington journalists with permanent credentials to cover Congress, and we found only 2.5 percent of them solicit end-to-end encrypted communication via their Twitter bios. That's just 62 out of all the broadcast, newspaper, wire service, and digital reporters. Just 28 list a way to reach them via Signal or another secure messaging app. Only 22 provide a PGP public key, a method that allows sources to send encrypted messages. A paltry seven advertise a secure email address. In an era when anything that can be hacked will be and when the president has declared outright war on the media, this should serve as a frightening wake-up call.

[...]

When journalists don't step up, sources with sensitive information face the burden of using riskier modes of communication to initiate contact­ -- and possibly conduct all of their exchanges­ -- with reporters. It increases their chances of getting caught, putting them in danger of losing their job or facing prosecution. It's burden enough to make them think twice about whistleblowing.

I forgive them for not using secure e-mail. It's hard to use and confusing. But secure messaging is easy.

Posted on August 31, 2017 at 6:52 AM


from Schneier on Security http://ift.tt/2vIfoFS

Attackers exploited Instagram API bug to access users’ contact info

Instagram has confirmed that “one or more individuals obtained unlawful access to a number of high-profile Instagram users’ contact information — specifically email address and phone number — by exploiting a bug in an Instagram API.”

Apparently, no account passwords were exposed.

No more details about the bug were shared, only that it has now been fixed. They also didn’t say whether the bug affected only verified or all types of Instagram accounts, or whether the stolen information was used to compromise verified accounts.

American singer and actor Selena Gomez has recently had her Instagram account hijacked by attackers who went on to post nude photos of former boyfriend Justin Bieber, but it is unknown whether that hijack has anything to do with this bug.

The Facebook-owned company has notified verified members of the hack, and has urged users “to be vigilant about the security of their account and exercise caution if they encounter any suspicious activity such as unrecognized incoming calls, texts and emails.”


from Help Net Security http://ift.tt/2xAmopF

People-rating app Sarahah slurps up contacts for feature that doesn’t exist


Many social media apps sink their fangs into users’ devices to suck out their contact lists.

It makes sense. How else would they a) offer to hook you up with people you know and/or b) send a swarm of marketing email to pester your friends?

It’s not only potentially useful; it can also drive your buddies insane with the resulting plague of marketing email, if LinkedIn’s past pestering is any indication.

And now, there’s a problem with the way that the latest viral sensation app, Sarahah, siphons contact lists. Namely, it is quietly sucking up users’ contacts, but it’s not giving them anything in return.

Sarahah, the latest people-rating app, bills itself as a way to “receive honest feedback” from friends and employees… anonymously. How the “anonymous” part of the equation jibes with showing users who else they know on the app is anybody’s guess.

Sarahah claims that on iOS it uses contact data to show users who in their address books are using the app. But according to Zachary Julian, a senior security analyst at Bishop Fox, the app is sucking up contacts without handing over the goods.

Zain al-Abidin Tawfiq, the developer who created Sarahah, said in a tweet that the feature is in the works.

He also said, in a subsequent tweet, that the Sarahah database is currently empty: it has nary a single contact in it. Tawfiq said that the Find Your Friends feature was delayed “due to a technical issue,” that the database isn’t currently hosting contacts, and that the app’s data request is going to be yanked in the next release.

But there are a few issues with Find Your Friends that Twitter respondents, and Julian, posed to him:

  1. Why didn’t he wait until the feature was ready before gobbling up address books?
  2. Doesn’t Find Your Friend defeat the purpose of an anonymous people-rating app?
  3. Maybe Sarahah has some empty database lying around, but wherever else the data is flowing, the app’s been caught in the act of siphoning.

Some sound like they want to see Tawfiq’s father give him a little bit of “people rating” over the first issue.

Julian has posted a video to show the address book harvesting in action on Android. He notes that the iOS version of the app also contains functionality to send every phone number, email address and associated names on a device to Sarahah’s servers.

As soon as users log into the app, Sarahah attempts to upload all phone and email contacts. On iOS and Android 6+, the operating system will prompt the user before allowing access to the phone’s contacts, but phones running Android 5 and below – and there are a lot of them – won’t be prompted. All they get is the permissions prompt during installation from the Play Store.

Julian:

On Android 5 and below, these requests will be issued silently and without user interaction. With an estimated 54% of users running Android 5 and below, this is probably a substantial amount of Sarahah’s 10 [million] to 50 million Android users.

It’s likely that most users permit access to their contacts without considering how this data may be used.

iOS does a better job at warning users about the data upload, he said, by explicitly prompting whether to allow the application access to the phone’s contacts and giving users a chance to say no.

Why should this trouble us? It’s not as if social media apps didn’t regularly request our contacts. But Julian notes that at this point, we don’t have the feature, and “all we have is the company’s word” that it’s coming.

We can take Tawfiq’s claims at face value — maybe that database is indeed an empty holder, without any contact details, be they phone numbers, names or email addresses.

Otherwise, given tens of millions of installs – Sarahah is a top free app on iTunes – that means tens of millions of address books harvested.

The thing is, Julian found that Sarahah did indeed upload his private information when he installed the app on his Android phone, a Galaxy S5 running Android 5.1.1. Julian told The Intercept that his phone was outfitted with monitoring software, known as Burp Suite, that intercepts internet traffic entering and leaving the device, allowing the owner to see what data is sent to remote servers.

Sure enough, when Julian launched Sarahah, Burp Suite caught it uploading his private data.
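To make the point concrete, here is a minimal sketch, in Python rather than Burp Suite itself, of the kind of check an intercepting proxy enables: scanning a captured request body for contact-like data before it leaves the device. The payload, patterns and threshold are all hypothetical.

```python
import re

# Hypothetical sketch of the kind of check an intercepting proxy enables:
# scan a captured request body for contact-like data (emails, phone numbers).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def looks_like_contact_upload(body: str) -> bool:
    """Flag a body carrying several emails or phone numbers at once."""
    hits = len(EMAIL_RE.findall(body)) + len(PHONE_RE.findall(body))
    return hits >= 3  # a batch of contacts is a stronger signal than one match

# Made-up payload resembling a bulk contact upload.
captured = ('{"contacts": [{"name": "A", "phone": "+1 555 010 7788"}, '
            '{"name": "B", "email": "b@example.com"}, '
            '{"name": "C", "email": "c@example.org"}]}')
print(looks_like_contact_upload(captured))  # the batch upload is flagged
```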

Here’s some non-anonymous, honest feedback: there are many ways for personal data to be revealed, be it through data breaches or from a supposedly anonymous app offering to show users who else is using it.

If Sarahah is struggling with “technical” issues that caused it to prematurely grab data (that just maybe it shouldn’t be grabbing in the first place), should you trust that it will keep your name out of the picture when you give “honest” feedback about your boss?

Honestly? I’ll take a pass.



from Naked Security http://ift.tt/2wUoJil

Patients with St. Jude pacemakers called in for firmware update

Patients using one of several types of implantable radio frequency-enabled pacemakers manufactured by St. Jude Medical will have to visit their healthcare provider to receive a firmware update that fixes several cybersecurity issues.


The update can’t be pushed over the air through the Merlin@home Transmitter unit because, in a very small number of cases, it could lead to a complete loss of the pacemaker’s functionality, the loss of currently programmed device settings, or a reloading of a previous firmware version.

Affected devices

The affected pacemaker and CRT-P devices are those sold by Abbott Laboratories (formerly St. Jude Medical) under the following names: Accent, Anthem, Accent MRI, Accent ST, Assurity, and Allure.

All in all, in the US, some 465,000 devices require the update. It is unknown how many devices have been implanted in patients outside the US.

“The FDA has reviewed information concerning potential cybersecurity vulnerabilities associated with St. Jude Medical’s RF-enabled implantable cardiac pacemakers and has confirmed that these vulnerabilities, if exploited, could allow an unauthorized user (i.e. someone other than the patient’s physician) to access a patient’s device using commercially available equipment,” the US Food and Drug Administration noted.

“This access could be used to modify programming commands to the implanted pacemaker, which could result in patient harm from rapid battery depletion or administration of inappropriate pacing. After installing this update, any device attempting to communicate with the implanted pacemaker must provide authorization to do so.”

The Merlin Programmer (operated by healthcare workers) and Merlin@home Transmitter (patients’ home monitor) will provide such authorization.

In addition to this, for some devices (Accent and Anthem), the updated pacemaker firmware will also prevent unencrypted transmission of patient information.

Update instructions for physicians

Abbott Laboratories has provided instructions to physicians on how the update should be performed, noting that the new device firmware will be loaded into the Merlin Programmer along with new programmer software, and that the download of the update to the pacemakers will take approximately three minutes.

“There have been no reports of unauthorized access to any patient’s implanted device, and according to an advisory issued by the US Department of Homeland Security, compromising the security of these devices would require a highly complex set of circumstances,” the company made sure to note.

This is not the first time that Abbott/St. Jude Medical has had to push out security updates for its pacemakers. In January, a security patch was provided, but patients didn’t have to come in for it to be implemented. Instead, they could apply it via their Merlin@home Transmitter units.


from Help Net Security http://ift.tt/2x8v6Od

The security status quo falls short with born-in-the-cloud software

Born-in-the-cloud software, pioneered by companies like Salesforce, is beginning to dominate the computing landscape. According to Gartner, by 2020, the cloud shift will affect more than $1 trillion in IT spending, and cloud computing will be one of the most disruptive forces since the early days of the digital age.

We all realize that opportunities abound. Gartner’s Ed Anderson says, “the cloud shift is not just about cloud. As organizations pursue a new IT architecture and operating philosophy, they become prepared for new opportunities.” He goes on to state, “organizations embracing dynamic, cloud-based operating models position themselves better for cost optimization and increased competitiveness.”

But, do we really understand the critical nature of born-in-the-cloud software’s greatest challenge – security?

Security solution landscape

To understand the security challenge, it’s critical to size up current solutions and see whether we have what we need to protect cloud-based software. It’s equally important to identify the key characteristics of born-in-the-cloud software to know what constitutes a good solution.

These characteristics are:

  • Modern continuous integration and deployment (CI/CD) has enabled faster speed of software development (more features faster)
  • Monolithic applications of yesterday have given way to microservices, each of which can scale independently
  • The microservices are developed by small independent teams causing the environment to be polyglot – multiple programming languages are in use depending on the purpose of the microservice
  • Diversity in deployment methods means the microservices are deployed in different IaaS environments (AWS, Azure, etc.) and in different operating environments: virtual machines, containers, and perhaps tomorrow unikernels.

With this backdrop of how born-in-the-cloud software is significantly different let’s assess some of the key categories of security solutions.

Code analysis

These tools identify vulnerabilities in software so that they can be fixed before they become problems. The challenge is that they are slow and inaccurate. Above all, developers are measured on speed and feature delivery, not on the security of their code. Consequently, code analysis is not used often. A 2017 JetBrains survey of more than 5,000 developers showed that the majority of developers don’t use code analysis.

There is a fundamental disconnect between the promise of code analysis tools and the nature of cloud software, which if nothing else, is more agile.

Web Application Firewall (WAF)

A web application firewall (WAF) for HTTP applications applies a set of rules to an HTTP conversation. Generally, these rules cover common attacks such as cross-site scripting (XSS) and Structured Query Language (SQL) injections.

A WAF’s greatest limitation is that it is threat-focused: it matches customer traffic against known attack patterns, which makes it reactive and prone to false positives. WAFs additionally apply the same set of signatures to any and all web applications. Some WAFs come with a learning mode that takes 2-3 days to understand the application and provide more targeted protection. In a CI/CD environment, this is too slow, as the microservices environment could change multiple times a day.
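A toy illustration of why signature matching cuts both ways. The rules below are invented for this sketch (no real WAF ships exactly these), but they show how the same pattern that catches an injection probe also blocks a harmless request that merely mentions SQL:

```python
import re

# Invented signature rules for illustration only; no real WAF ships exactly
# these. They catch common attack probes but, applied uniformly to every
# application, also block harmless traffic that merely resembles an attack.
SIGNATURES = [
    re.compile(r"(?i)<\s*script"),          # reflected XSS attempt
    re.compile(r"(?i)union\s+select"),      # SQL injection probe
    re.compile(r"(?i)'\s*or\s+1\s*=\s*1"),  # classic tautology injection
]

def waf_blocks(request_param: str) -> bool:
    return any(sig.search(request_param) for sig in SIGNATURES)

print(waf_blocks("q=<script>alert(1)</script>"))        # real attack: blocked
print(waf_blocks("id=1 UNION SELECT password FROM t"))  # real attack: blocked
# False positive: a forum post about SQL trips the very same rule.
print(waf_blocks("post=How does UNION SELECT work?"))   # blocked, wrongly
```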

Runtime Application Self Protection (RASP)

RASP is built into an application and can detect and prevent real-time application attacks. RASP improves on WAF by instrumenting the application in pre-determined key points of interest but the logic of detecting attacks is still the same, i.e., focusing on threats.

RASP offers some key advantages but has fallen short of its promise. Adoption has been slow because it changes the application, creating accountability issues (who caused the bug in the application – the application or the RASP agent?). Its effectiveness is limited to attacks such as XSS and SQL injection, and it impacts the performance of the application.
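The instrumentation idea can be sketched in a few lines. The guard below is hypothetical, not any vendor’s RASP agent: it wraps one pre-determined point of interest (a query function) and inspects arguments at runtime, still matching threat patterns, which is exactly the limitation described above.

```python
import functools
import re

# Minimal sketch of the RASP idea; a hypothetical guard, not any vendor's
# agent. It instruments one pre-determined point of interest (the query
# function) and inspects arguments at runtime, still matching threat patterns.
INJECTION = re.compile(r"(?i)(union\s+select|;\s*drop\s+table|'\s*or\s+1=1)")

class BlockedError(Exception):
    """Raised when the in-app guard refuses to run a query."""

def rasp_guard(func):
    @functools.wraps(func)
    def wrapper(query, *args, **kwargs):
        if INJECTION.search(query):
            raise BlockedError(f"blocked suspicious query: {query!r}")
        return func(query, *args, **kwargs)
    return wrapper

@rasp_guard
def run_query(query):
    return f"executed: {query}"  # stand-in for a real database call

print(run_query("SELECT name FROM users WHERE id = 7"))
try:
    run_query("SELECT name FROM users WHERE id = 7 UNION SELECT secret FROM t")
except BlockedError as err:
    print(err)
```

Note that the guard itself modifies the application’s behaviour at runtime, which is precisely the accountability concern the article raises.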

Container security

Software containers are a lightweight alternative to virtual machines, with much leaner system requirements. Containers offer advantages to DevOps and are an essential tool in creating and protecting software applications.

Security for containers is surely desirable. But, I would argue that Amazon, Microsoft, and Google will (over time) protect the container’s attack surface, leaving security of cloud-born-software protection to us. Let’s not forget that we also need to protect software deployed on virtual machines!

Cloud compliance

When organizations move their software to the cloud with services such as Amazon Web Services (AWS), they take on unknown risks inherent in the IaaS platform. Cloud services like AWS provide application programming interfaces (APIs) to inform DevOps of what the software is doing in the cloud environment to minimize risk. Cloud compliance monitoring companies provide risk assessment analysis of born-in-the-cloud software via such APIs. It is a much-needed capability.

But, given that the APIs are platform-specific (e.g., AWS) as opposed to application-specific, they can only report the symptoms of some attacks, not the attacks themselves. AWS aptly reminds customers that they are responsible for the security of their own applications.

Thinking differently

While there is a lot of innovation happening in cybersecurity, the solutions available seem piecemeal, fragmented and, above all, threat-focused. Today’s defense-in-depth approach is not designed for born-in-the-cloud software. It’s too heavy, capital- and operationally inefficient, and always reactive (in a world where we are seeing more than 100,000 pieces of new malware every day).

We need to think differently about securing software deployed in the cloud and fundamentally change our approach. As software proliferates and born-in-the-cloud software becomes the norm, the attack surface and exposure increases.

Even if cloud providers offer infrastructure protection, our key responsibility is to protect the software that we are hosting in the cloud. But, today’s software built and deployed in the cloud is like a living organism with continuously evolving components (microservices) – each with its own unique DNA. Each of these components needs to be protected.

We need to stop being reactionary and seek an approach that makes it possible to understand each piece of software, what makes it unique, and offer bespoke security that is tailored to the very needs of the software. Personalized medicine promises to do the same for healthcare leveraging one’s DNA – could we similarly extract the DNA of each piece of software and protect it?


from Help Net Security http://ift.tt/2gl1Jy2

A Framework for Cyber Security Insurance

New paper: "Policy measures and cyber insurance: a framework," by Daniel Woods and Andrew Simpson, Journal of Cyber Policy, 2017.

Abstract: The role of the insurance industry in driving improvements in cyber security has been identified as mutually beneficial for both insurers and policy-makers. To date, there has been no consideration of the roles governments and the insurance industry should pursue in support of this public­-private partnership. This paper rectifies this omission and presents a framework to help underpin such a partnership, giving particular consideration to possible government interventions that might affect the cyber insurance market. We have undertaken a qualitative analysis of reports published by policy-making institutions and organisations working in the cyber insurance domain; we have also conducted interviews with cyber insurance professionals. Together, these constitute a stakeholder analysis upon which we build our framework. In addition, we present a research roadmap to demonstrate how the ideas described might be taken forward.


from Schneier on Security http://ift.tt/2wTzz8d

News in brief: AI writes new GoT book; Google breaks out of the speaker; Cortana and Alexa hook up


Your daily round-up of some of the other stories in the news

Can’t wait for the next GoT book?

For Game of Thrones fans feeling bereft after the end of season seven and fed up with waiting for creator George RR Martin to deliver the sixth of seven planned novels in the series, help is at hand. Well, sort of.

“Huge fan” Zack Thoutt of Udacity is busy training a neural network to write new chapters of “the book we’re all waiting for”.

As he explains on the Udacity blog, “writing the code for the model and training it only took a few days of work, and after tuning the model’s hyperparameters, I started to get some interesting results”.

You can judge for yourself: Zack has posted the first five chapters of the new book on GitHub to read. Does a neural network mean that we can now discard the old metaphor of unlimited monkeys and typewriters? And will it keep you happy until spring 2019, when it’s thought that the next series of Game of Thrones makes it to air – assuming, of course, hackers don’t release it first?

‘Hey Google, is my washing done?’

How do you feel about asking your washing machine if your laundry cycle is finished? Or telling your sprinklers to water the lawn – all without having to get off your sofa?

Just as we’ve started to get used to having smart speakers in our homes, with the Amazon Alexa devices – the Echo and the Dot – and Google Home, the next stage, according to Google, is adding its smart assistant to a much wider range of devices.

Google said at the annual IFA consumer tech show in Berlin that it was adding its Assistant to three other manufacturers’ devices: Anker’s Zolo Mojo speaker, the Mobvoi TicHome Mini and Panasonic’s GA10. And there are more partnerships to come – including LG, which, we learned at IFA, will soon add the ability to start the washing machine and check on your dryer’s progress.

‘Alexa, make friends with Cortana’

Meanwhile, another smart assistant partnership was being announced – an unlikely tie-up between the two giants of the Seattle area: Amazon’s Alexa and Microsoft’s Cortana.

The New York Times reported on Wednesday that the two companies have been collaborating for the past year to get the two assistants to talk to each other, and the functionality is expected to be rolled out by the end of the year.

What’s smart about this tie-up is that the two assistants are on very different devices and used in different ways. While Cortana is actually pretty good at the kind of queries you might ask of a mobile device, the death of the Windows Phone platform means that it’s an assistant that’s largely confined to PCs, while the Amazon devices have found a solid foothold in people’s homes – and are always on, which a laptop isn’t.

Amazon boss Jeff Bezos pointed out to the NYT that Cortana is deeply and very effectively integrated with Microsoft Office, which means for example that you could ask Alexa what time and where your next meeting with your boss is.

Catch up with all of today’s stories on Naked Security



from Naked Security http://ift.tt/2vKepV0

Bitcoin users, the taxman wants to know what’s in your piggybank


Can Bitcoin help you escape the latter half of Ben Franklin’s famous declaration that the only certainties in the world are death and taxes?

We’re likely to find out sometime in the coming months. While nobody has cheated death so far, some people figure the odds of avoiding taxes are way better – if the Internal Revenue Service can’t trace their profits.

But the IRS, very much aware of that kind of thinking, is working just as hard to reduce the odds on taxes to the same as those on death.

As is well established, Bitcoin – probably the best-known cryptocurrency – is widely used in the criminal underworld, since users can remain largely anonymous. But, besides the fact that the IRS demands that you pay your taxes on all income, including illegal income, it is going after those whose earnings may be totally legit but who also want to keep it all.

And while the lure of Bitcoin is anonymity, Big Brother is getting close to drawing back that veil. At a minimum, he knows where to look.

The Daily Beast reported last week that the IRS has had a contract since 2015 with Chainalysis, a New York-based company that markets a “Reactor” tool to track and analyze the movement of Bitcoin transactions. The goal of the agency is obvious – to “follow the money” as it moves from wallet to wallet, and eventually to an exchange where the owner cashes out in dollars or another fiat currency.
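Stripped of Chainalysis’s proprietary heuristics, “follow the money” reduces to graph traversal. The sketch below uses made-up wallet identifiers and a toy transaction list; it illustrates the idea, not the Reactor tool itself.

```python
from collections import deque

# Toy "follow the money": transactions as edges in a directed graph, walked
# forward from a wallet of interest to see whether funds reach a known
# exchange address. Wallet names are made up; this is not the Reactor tool.
transactions = [
    ("wallet_A", "wallet_B"),
    ("wallet_B", "wallet_C"),
    ("wallet_C", "exchange_1"),
    ("wallet_D", "wallet_E"),
]

def reachable_exchanges(start, txs, exchanges):
    graph = {}
    for src, dst in txs:
        graph.setdefault(src, []).append(dst)
    seen, queue, found = {start}, deque([start]), set()
    while queue:                       # breadth-first walk over the tx graph
        node = queue.popleft()
        if node in exchanges:
            found.add(node)            # a cash-out point is visible here
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return found

print(reachable_exchanges("wallet_A", transactions, {"exchange_1"}))
# wallet_A's coins can be traced, hop by hop, to exchange_1
```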

Based on pretty simple math, the IRS figures it’s more than worth the $88,700 it has reportedly paid Chainalysis so far – there are likely a lot of people out there who aren’t paying their “fair share”.

For one thing, the gap between the number of people dealing in Bitcoin and the number declaring income from it is wide – very, very wide.

The IRS said in court documents that between 2013 and 2015, fewer than 900 people per year reported income on Form 8949, which is used to account for “a property description likely related to Bitcoin”. That compares rather pitifully to the number of people using Coinbase – “the largest exchanger in the US of Bitcoin into US dollars,” according to the government – with 4.8m users and 10.6m wallets.

For another thing, it’s been a red-hot investment. Bitcoin’s US value was $13 at the start of 2013, and during the three-year period grew to nearly 85 times that, spiking to more than $1,100. Since then the percentage increase has slowed, but its US value was $4,563 at the time of publication, with some predictions that it could soar to $20,000 in the next three years, although there are others who think it’s a bubble, with about as much long-term value as a Ponzi scheme.

But, focusing on the three-year search of Form 8949s, the Department of Justice filed an ex parte petition last November in US District Court in California seeking authorization for the IRS to issue a “John Doe” summons that would require Coinbase to provide information on any US persons who

… at any time during the period January 1, 2013, through December 31, 2015, conducted transactions in a convertible virtual currency as defined in IRS Notice 2014-21.

The taxpayers being investigated have not been or may not be complying with US internal revenue laws requiring the reporting of taxable income from virtual-currency transactions.

With pretty good reason – Bitcoin and other virtual currency transactions don’t produce any third-party documentation, like the 1099s you receive from your bank or investment brokers.

Coinbase refused to comply, complaining that the summons was “indiscriminate and over broad”. And the company got some heavy hitters on its side in May, when several committee chairs in both the House and Senate sent a letter to IRS Commissioner John Koskinen seeking more information on the summons, which they said could affect as many as 500,000 people – 90% of whom they said were

… engaged in less than $10,000 in cumulative, gross transactions during the entire period requested.

Based on the information before us, this summons seems overly broad, extremely burdensome and highly intrusive to a large population of individuals.

Perhaps in response to that pressure, the IRS blinked in July – pretty big time. It reduced the scope of the summons to include only users who had made “at least the equivalent of $20,000 in any one transaction type (buy, sell, send or receive) …”

Which obviously would leave the vast majority of “less fortunate” Bitcoin players out of the dragnet.

Fortune reported in March that Coinbase CEO Brian Armstrong had offered to provide customers with 1099-B forms – the ones banks and brokerages provide. But the case is still ongoing.

And however it is resolved, this will not end the cat-and-mouse game. As a number of reports on the conflict noted, criminals are endlessly adaptive, and so are the tools created to serve them. Some have already left Bitcoin in favor of other virtual currencies like Zcash, which promises to “fully protect the privacy of transactions using zero-knowledge cryptography”, or Monero, which offers “secure, private, untraceable currency”.

Not to mention that, at least for now, they aren’t under the same level of federal scrutiny.



from Naked Security http://ift.tt/2wSNQBX

Proof that HMAC-DRBG has No Back Doors

New research: "Verified Correctness and Security of mbedTLS HMAC-DRBG," by Katherine Q. Ye, Matthew Green, Naphat Sanguansin, Lennart Beringer, Adam Petcher, and Andrew W. Appel.

Abstract: We have formalized the functional specification of HMAC-DRBG (NIST 800-90A), and we have proved its cryptographic security -- that its output is pseudorandom -- using a hybrid game-based proof. We have also proved that the mbedTLS implementation (C program) correctly implements this functional specification. That proof composes with an existing C compiler correctness proof to guarantee, end-to-end, that the machine language program gives strong pseudorandomness. All proofs (hybrid games, C program verification, compiler, and their composition) are machine-checked in the Coq proof assistant. Our proofs are modular: the hybrid game proof holds on any implementation of HMAC-DRBG that satisfies our functional specification. Therefore, our functional specification can serve as a high-assurance reference.
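For readers who want a feel for the algorithm being verified, here is a simplified HMAC-DRBG sketch following NIST SP 800-90A. It covers instantiate and generate only, omitting reseed counters, prediction resistance and the checks the verified mbedTLS code performs; it is a teaching aid, not the proven implementation.

```python
import hashlib
import hmac

# Simplified HMAC-DRBG (SHA-256) after NIST SP 800-90A: instantiate and
# generate only, with no reseed counter or prediction-resistance handling.
# A teaching sketch, not the verified mbedTLS implementation.
OUTLEN = 32  # SHA-256 output length in bytes

def _update(K, V, data=b""):
    K = hmac.new(K, V + b"\x00" + data, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    if data:
        K = hmac.new(K, V + b"\x01" + data, hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
    return K, V

def instantiate(seed_material):
    K, V = b"\x00" * OUTLEN, b"\x01" * OUTLEN
    return _update(K, V, seed_material)

def generate(state, nbytes):
    K, V = state
    out = b""
    while len(out) < nbytes:           # chain V through HMAC to extend output
        V = hmac.new(K, V, hashlib.sha256).digest()
        out += V
    return out[:nbytes], _update(K, V)  # post-generate state update

state = instantiate(b"entropy-input||nonce")  # placeholder seed material
block1, state = generate(state, 16)
block2, state = generate(state, 16)
print(block1 != block2)  # successive blocks differ; same seed reproduces them
```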


from Schneier on Security http://ift.tt/2wSw8hY

Protect Your Online Privacy With a Lifetime VPN Membership For Just $38

VPNs have been in the news lately, and if you’re ready to start protecting your online privacy, here’s one of the best deals we’ve ever seen.

$38 would normally be a solid price for a year’s subscription to a good VPN service, but today, that gets you a lifetime membership at Windscribe, which Lifehacker has given its stamp of approval. Just head over here to find the deal, and use promo code VPN4LIFE to knock the cost down to $38.

More Deals



from Lifehacker http://ift.tt/2vEgzGl

Trump’s cybersecurity advisers quit, warning of ‘insufficient attention’


More than a third of the White House National Infrastructure Advisory Council (NIAC) has given President Donald Trump a failing grade on cybersecurity. But before that, they had a hand in a draft cybersecurity plan that could improve that grade.

A group resignation, which reduced the council from 28 to 20 members last week (three were Obama administration holdovers), came with a resignation letter protesting what the outgoing members said was Trump’s “disregard for the security of American communities”.

Much of their focus was on moral and environmental issues – what they said was Trump’s failure to “denounce the intolerance of hate groups,” after the violence in Charlottesville, Va., and his withdrawal from the Paris climate agreement.

But they also cited “insufficient attention to the growing threats to the cybersecurity of the critical systems upon which all Americans depend, including those impacting the systems supporting our democratic election process”.

They’re not the only critics. Sen. John McCain (R-AZ), chairman of the Senate Armed Services Committee and a regular critic of the president, recently had harsh things to say about both Trump and his predecessor, President Obama, when it comes to their leadership on cybersecurity.

Speaking at the Arizona State University Congressional Conference on Cybersecurity Conference last Wednesday, McCain said that as America’s enemies “seized the initiative in cyberspace, the last administration offered no serious cyber deterrence policy and strategy. And while the current administration promised a cyber policy within 90 days of inauguration, we still have not seen a plan.”

All of which is true, but all of which is not the whole truth. Trump has indeed been late – quite late – on promises regarding cybersecurity. He promised an executive order on it within weeks of his inauguration, and was reportedly due to sign it in late January, but it was delayed until May 11.

That order, however, did provide some specifics – it instructed federal agencies to implement the NIST Framework for Improving Critical Infrastructure Cybersecurity.

It got mixed reviews from cybersecurity experts. Jacob Olcott, vice-president at BitSight and former legal adviser to the Senate Commerce Committee and counsel to the House of Representatives Homeland Security Committee, said it was “smart policy and a big win for this administration”.

On the other side, Daniel Castro, vice-president of the science- and tech-policy think tank Information Technology and Innovation Foundation (ITIF), called it “mostly a plan for the government to make a plan, not the private sector-led, actionable agenda that the country needs to address its most pressing cyberthreats”.

But such a plan could be in the works if the administration acts on a draft report approved just a couple of weeks ago by the NIAC, prior to the resignations.

The report, titled “Securing Cyber Assets: Addressing Urgent Cyber Threats to Critical Infrastructure”, is based on “the review of hundreds of studies and interviews with 38 cyber and industry experts, (which) revealed an echo chamber, loudly reverberating the enormity of the challenge and what needs to be done”.

It says that while both government and the private sector have

… tremendous cyber capabilities and resources needed to defend critical private systems from aggressive cyber attacks … today we’re falling short. Cyber capabilities and oversight are fragmented, and roles and responsibilities remain unclear. We’re simply not organized to keep up with the threat.

The report declares that “there is a narrow and fleeting window of opportunity before a watershed, 9/11-level cyberattack to organize effectively and take bold action”.

And that is followed by 11 recommendations, which include:

  • Establish separate, secure communications networks, specifically designated for the most critical cyber networks.
  • Facilitate a private-sector-led pilot of machine-to-machine information sharing technologies.
  • Identify best-in-class scanning tools and assessment practices, and work with owners of critical networks to scan and sanitize their systems.
  • Strengthen today’s cyber workforce by sponsoring a public-private expert exchange program.
  • Streamline and expedite the security clearance process for owners of the nation’s most critical cyber assets.
  • Rapidly declassify cyber threat information to share it with owners and operators of critical infrastructure.
  • Create a task force of experts in government and the electricity, finance and communications industries, to act on the nation’s top cyber needs with the speed and agility required by escalating cyberthreats.

All of which sounds a lot like a plan.



from Naked Security http://ift.tt/2gptTLU

Advantech fixes serious vulns in WebAccess HMI/SCADA software

Advantech has plugged nine security holes in WebAccess and has urged users to upgrade the software as soon as possible.


Advantech WebAccess is a web browser-based software package for human-machine interfaces (HMI) and supervisory control and data acquisition (SCADA).

A variety of vulnerabilities

The vulnerabilities, fixed in the latest version of the product, range from SQL injection flaws and buffer overflows to incorrect privilege and permission assignment and improper authentication.

If exploited, they could lead to account modifications, privilege escalation, information leakage, remote code execution, and system crashes.

The good news is that they were discovered by security researchers, and there are no known public exploits for them.

Upgrade WebAccess to stay safe

ICS-CERT advises users to upgrade to WebAccess V8.2_20170817, as well as to take defensive measures to minimize the risk of exploitation of these vulnerabilities.

This could be achieved by minimizing the network exposure of these systems, putting them behind firewalls and isolating them from the business network, and using VPNs when remote access to them is required.


from Help Net Security http://ift.tt/2vrBfW4

The NSA's 2014 Media Engagement and Outreach Plan

Interesting post-Snowden reading, just declassified.

(U) External Communication will address at least one of "fresh look" narratives:

  1. (U) NSA does not access everything.
  2. (U) NSA does not collect indiscriminately on U.S. Persons and foreign nationals.
  3. (U) NSA does not weaken encryption.
  4. (U) NSA has value to the nation.

There's lots more.


from Schneier on Security http://ift.tt/2wRZD3r

WireX botnet offers glimpse of Android DDoS threat


A consortium of internet companies has disrupted a botnet called WireX that has plagued Content Delivery Networks (CDNs) with nuisance DDoS attacks in recent weeks.

There’s nothing special about DDoS attacks or botnets but we’re writing up WireX for several reasons, starting with the fact it was built from infected Android devices.

Given that researchers believe it might have infected 140,000 devices in 100 countries by its peak on August 17, that’s a big DDoS botnet by Android standards, perhaps the biggest ever.

The source of infection was any one of 300 apps downloaded from the Google Play Store that had somehow sneaked past the store’s much vaunted security algorithms.

Despite what Google says, it’s perfectly possible to do this, as demonstrated by a separate incident this month when 500 applications (with 100 million downloads) were yanked after a mobile security company discovered an embedded advertising SDK was being used to update them with spyware.

The WireX-infected apps, by contrast, hid their malevolent behaviour behind ordinary-looking media players, ringtones and storage managers. Designed to launch DDoS attacks in the background (in other words, when the device is turned on but not in use), it’s possible owners would have been unaware of anything untoward.

The companies believe it sprang into life around August 2, growing rapidly to its peak in the middle of the month when they decided to collaborate to track down what was behind this sudden DDoS spike.

It’s not clear whether it was the size of the attacks that caught their attention or the unusual way traffic from it was distributed across many countries. That WireX appeared suddenly would have stood out.

Probably built on the skeleton of an old click-fraud app, WireX isn’t even that sophisticated, relying on throwing lots of HTTP traffic at target websites until they choke.

It’s a simple tactic but also clever because the traffic looks legitimate. This makes it tricky to stop without taking servers offline, which is why researchers pooled resources to root out the botnet’s infected clients the hard way.
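One generic way to surface flood traffic that looks legitimate request by request is per-client rate counting over a sliding window. The window and threshold below are invented, and this is a general heuristic, not a description of how the researchers actually fingerprinted WireX.

```python
from collections import defaultdict, deque

# Hypothetical per-client sliding-window rate check: one generic way to
# surface flood traffic that looks legitimate request by request. The window
# and threshold are invented; this is not the researchers' actual method.
WINDOW_SECONDS = 10
MAX_REQUESTS = 20

class RateMonitor:
    def __init__(self):
        self.history = defaultdict(deque)  # client ip -> request timestamps

    def is_flooding(self, client_ip, now):
        times = self.history[client_ip]
        times.append(now)
        while times and now - times[0] > WINDOW_SECONDS:
            times.popleft()                # drop requests outside the window
        return len(times) > MAX_REQUESTS

monitor = RateMonitor()
normal = any(monitor.is_flooding("10.0.0.1", t) for t in range(0, 10, 3))
flood = any(monitor.is_flooding("10.0.0.2", t * 0.1) for t in range(100))
print(normal, flood)  # the normal client passes; the hammering client trips
```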

WireX did, at least, bring everyone together in a matter of days. Said participant Akamai:

In the wake of the Mirai attacks, information sharing groups have seen a resurgence, where researchers share situation reports and, when necessary, collaborate to solve internet-wide problems.

This would have meant sharing competitive data such as IP addresses, request headers and, in WireX’s case, DDoS ransom notes sent to CDNs. Privacy concerns mean that doing this isn’t always as simple as it might seem from the outside.

Which devices are vulnerable?

Given that infected apps were downloaded from the Play Store (their names haven’t been revealed), any version of Android they were compatible with could have been targeted. Devices running Android security software such as Sophos Mobile Security for Android will detect WireX, with some identifying it as generic click fraud malware.

Command and control domains are identified in the WireX advisory published by the researchers. It’s possible that a temporary defence against WireX would be to enable Android’s “restrict background data” setting.



from Naked Security http://ift.tt/2whHjiM

Cisco discloses LabVIEW code execution flaw that won’t be patched

LabVIEW, the widely used system design and development platform developed by National Instruments, sports a memory corruption vulnerability that could lead to code execution.


LabVIEW is commonly used for building data acquisition, instrument control, and industrial automation systems on a variety of operating systems: Windows, macOS, Linux and Unix.

The vulnerability (CVE-2017-2779)

The vulnerability was discovered by Cory Duplantis of Cisco Talos earlier this year, and reported to the company.

It can be triggered by the victim opening a specially crafted VI file – a proprietary file format that’s comparable to the EXE file format.

“Although there is no published specification for the [VI] file format, inspecting the files shows that they contain a section named ‘RSRC’, presumably containing resource information,” Cisco noted.

“Modulating the values within this section of a VI file can cause a controlled looping condition resulting in an arbitrary null write. This vulnerability can be used by an attacker to create a specially crafted VI file that when opened results in the execution of code supplied by the attacker. The consequences of a successful compromise of a system that interacts with the physical world, such as a data acquisition and control systems, may be critical to safety.”

More details about the flaw can be found in this report. It affects the latest stable LabVIEW version (LabVIEW 2016 version 16.0), but it’s possible that earlier iterations are also vulnerable.
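With no published VI specification, about the only programmatic check a defender can do is crude triage. As a purely hypothetical illustration (not a validator – presence or absence of the marker proves nothing about tampering), one could locate the “RSRC” section marker Cisco mentions in a file’s raw bytes:

```python
def find_rsrc_offsets(data: bytes) -> list:
    """Return the byte offset of every 'RSRC' marker in raw VI file bytes.

    Illustrative only: the VI format is proprietary and undocumented, so
    finding the marker says nothing about whether the resource section
    has been maliciously modulated.
    """
    offsets, start = [], 0
    while (i := data.find(b"RSRC", start)) != -1:
        offsets.append(i)
        start = i + 1
    return offsets

# Inspecting a file on disk (path is hypothetical):
# with open("example.vi", "rb") as f:
#     print(find_rsrc_offsets(f.read()))
```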

There will be no patch

“National Instruments does not consider that this issue constitutes a vulnerability in their product, since any .exe like file format can be modified to replace legitimate content with malicious and has declined to release a patch,” the researchers noted.

“Talos disagrees. There are similarities between this vulnerability and the .NET PE loader vulnerability CVE-2007-0041 which was patched in MS07-040. Additionally, many users may be unaware that VI files are analogous to .exe files and should be accorded the same security requirements.”

Since a patch is not forthcoming, LabVIEW users would do well not to open VI files of unchecked provenance. Also, two Snort rules have been made available for detecting exploitation attempts (41368, 41369).


from Help Net Security http://ift.tt/2wnKMLf

Drone maker DJI launches bug bounty program

Chinese consumer drone maker DJI has announced that it’s starting a bug bounty program and has invited researchers to discover and responsibly disclose issues that could affect the security of its software.

DJI bug bounty

“The DJI Threat Identification Reward Program aims to gather insights from researchers and others who discover issues that may create threats to the integrity of our users’ private data, such as their personal information or details of the photos, videos and flight logs they create. The program is also seeking issues that may cause app crashes or affect flight safety, such as DJI’s geofencing restrictions, flight altitude limits and power warnings,” the company said.

The website for the program and a form for reporting potential threats is yet to be set up, so specific details are scarce. What is known, though, is that rewards for qualifying bugs will range from $100 to $30,000 – the final amount will depend on the potential impact of the threat.

Until the website is ready, researchers can send in bug reports to bugbounty@dji.co.

The company has noted that this step was long overdue.

“DJI has not previously offered formal lines of communication about software issues to security researchers, many of whom have raised their concerns on social media or other forums when they could not determine how best to bring these issues to DJI’s attention,” they pointed out.

The decision to start a bug bounty program comes mere weeks after the U.S. Army ordered its members to stop using DJI drones because of cyber vulnerabilities.

Whether this order was due to the fact that the drones can send flight logs, photos or videos to DJI’s servers is unknown, but the company is working on making it possible for users to disconnect the drone from the Internet while it’s flying, so that the data can’t be sent to servers even by mistake.

Also, the company has been struggling to prevent users from modifying their drone’s firmware so that it can enter no-flight zones (e.g. airports, military installations) defined by the company.


from Help Net Security http://ift.tt/2wIzrYu

When AI and security automation become foolish and dangerous

AI security automation dangerousThere is a looming fear across all industries that jobs are at risk to artificial intelligence (AI), which can perform those same jobs better and faster than humans. A recent Forrester report predicts automation will replace 17 percent of U.S. jobs by 2027, only partly offset by the 10 percent growth in new jobs predicted to result from the automation economy.

As a vendor, the idea of automation is highly alluring. Automation is technology. Technology provides significant and material financial incentives over its unpredictable and fallible human counterparts. Perhaps most tellingly, automation is a key component of most vendors’ ROI stories, meaning it’s a powerful tool in the “buy our product and we will save you money” toolbox.

But should organizations really be sprinting headlong into automation? There is no question that automation delivers significant value to organizations. Repetitive and boring tasks waste valuable time and result in unhappy and unengaged employees. Similarly, other types of tasks, specifically those required to analyze large data sets, are better performed by computers. If automating these types of analytical tasks provides business value, organizations should certainly examine those options.

Implementing some automated solutions can prove valuable. However, when it comes to network security, fully automating the tasks of a security analyst can be a dangerous and foolish decision for a variety of reasons.

1. Cybersecurity threats are not software – they’re creative humans with latitude

Attackers are both intrinsically and extrinsically motivated to bypass controls, whether those controls are automated or otherwise. For attackers, breaking into your network is not only lucrative, it’s fun. That point is important enough it bears repeating: Hacking is fun.

As a defender, combating this creativity with automation is likely impossible, as the effectiveness of automation is predicated on the components of the system being automated. The failures in those components become failures of the automated system itself. Motivated attackers have bypassed, and will continue to bypass, these automated controls, patiently waiting and looking for flaws in automated defenses and the processes built on top of them. Why? Because when people are engaged in something fun, they are motivated by the pursuit itself and will work until they succeed.

2. Automating too many things makes hiring harder – not easier

If a process is not well understood, it can’t be automated. So the tasks that tend to be automated are the simple ones, leaving the more complex ones behind – a principle known in the study of automation as the “Left-over Principle.”

This poses a new problem for organizations implementing automation. Hiring people with appropriate skill sets to accomplish even the simple tasks of a security analyst is already a challenge. According to cyber security employment data tool CyberSeek, every year in the U.S., 40,000 jobs for information security analysts go unfilled. In most enterprises, the average turnover rate for security analysts is less than 2 years – a problem that is exacerbated by the current cybersecurity skills gap.

Think you’re having trouble finding people to do the simple tasks well? Wait until only the difficult, complex tasks remain. Because no cybersecurity prevention, detection, or control system is thoroughly effective, automation doesn’t eliminate the need for humans – it creates a new requirement for a human auditor and manager of the automated system itself. Additionally, you still need analysts specialized in the more complex, experience-dependent investigative tasks left over by automation, and automation can create problems here too, as highlighted next.

3. Automation can widen the cybersecurity skills gap

As mentioned above, the complicated leftover tasks require highly specialized skills from a small pool of qualified applicants who are difficult to attract. These leftover tasks demand highly flexible judgment and the creative application of different analytic methods.

Regularly engaging in simple tasks keeps one’s skills and perspective broadly relevant. It also leaves you better equipped for evolutions in the rarer, more complex tasks that are built from simpler building blocks. For example, studies of novice versus experienced professionals across industries frequently show that it’s not a difference in knowledge but the difference in repeated, broad experience between the two groups that forms the basis of experienced professionals’ better decisions (aka procedural knowledge). In other words, performing simpler tasks makes you better at harder ones.

Why is automation getting so much attention throughout the security industry?

For starters, there is currently significant investment in automation, independent of the thoughtfulness applied to the benefits and challenges it presents.

However, the concept is gaining material traction in enterprises for a reason: there is real pain and material risk being experienced by enterprises from the deluge of information currently blasted at security teams.

Automation does indeed help solve certain aspects of this problem in enterprises. Repetitive and boring tasks, highly error-prone tasks, data entry, and identifying patterns in large amounts of data that are not currently automated are all examples of the types of automation that make teams more effective. However, like previous fads in the security industry, automation has its share of benefits and challenges, and those challenges can have a net-negative effect (and a new cost center) if not well-understood.
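As a concrete (and entirely hypothetical) sketch of the kind of repetitive task worth handing to a machine, consider rolling failed-login events up per account so an analyst reviews a short summary instead of raw logs – the event format and threshold here are invented for illustration:

```python
from collections import Counter

def summarize_failed_logins(events, threshold=5):
    """Count failed-login events per account and surface outliers.

    `events` is an iterable of (account, outcome) pairs; accounts with
    more than `threshold` failures are returned for human review.
    """
    failures = Counter(acct for acct, outcome in events if outcome == "fail")
    return {acct: n for acct, n in failures.items() if n > threshold}
```

The machine does the counting; the analyst keeps the judgment call about legitimacy and intent – the division of labour the essay argues for.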

As long as there is a creative human attacker, automation should not be about replacing people. It’s about allowing people to reclaim the ability to do the functions they do better than machines, more efficiently and effectively. This is why you have security analysts in the first place: to discern the legitimacy of behavior, infer intent, and respond appropriately.


from Help Net Security http://ift.tt/2x35G4w

News in brief: Turing’s documents found; Uber steps back on tracking; feathered threat to police


Your daily round-up of some of the other stories in the news

Alan Turing’s documents uncovered

A collection of letters from Alan Turing, one of the founding fathers of modern computing and a brilliant cryptanalyst, has been uncovered in an old filing cabinet at the University of Manchester – and reveals that the mathematician, who moved to the university after the second world war, was not a fan of the United States.

Turing, who had led the codebreaking efforts at Bletchley Park that were credited with helping shorten the war, became deputy head of the university’s computing lab in 1948, and it was one of his modern-day successors at the university, Professor Jim Miles, who found the letters. He explains: “I was astonished such a thing had remained hidden out of sight for so long. No one who now works in the school or at the university knew they even existed.”

The cache of correspondence includes Turing’s notes on artificial intelligence for a BBC programme, and correspondence about invitations to lecture in the US, which Turing turned down flat, saying: “I would not like the journey, and I detest America.”

The collection is available to researchers at the university’s library. Said Miles: “It really was an exciting find and it is a mystery as to why they had been filed away.”

Uber pulls controversial tracking feature

Uber is to pull a feature in its app that tracked users for five minutes after they got out of their driver’s car, the beleaguered ride-sharing company said.

The company, which has faced a series of crises that culminated in its founder, Travis Kalanick, leaving, will roll out the update to the app this week.

The update will restore users’ ability to limit the app’s data gathering to when it’s actively being used. Since November, users have either had to consent to the app collecting their location data all the time, or not at all. The latter option meant users had to manually enter their location into the app when booking a cab.

Joe Sullivan, chief security officer, told Reuters that reinstating that option wasn’t connected to the C-suite upheavals at the company: “We’ve been building through the turmoil and challenges because we already had our mandate.”

Feathers ruffled as emergency number falters

Just when you thought you were on top of your cybersecurity – gateway protection, phishing mitigation, firewalls, endpoint protection and so on – along comes a whole new threat, as Avon and Somerset Police in southern England found out on Monday: a stray owl.

The police force had to urge locals only to call the 999 emergency number if it really was urgent after the bird flew into power cables, taking out the power supply at the force’s HQ near Bristol. The force’s staff had to come in to help provide a back-up service on Monday, which was a holiday in the UK.

This isn’t the first time wildlife has proved a threat to critical infrastructure: the Cyber Squirrel 1 project tracks animal damage around the world, with birds the second most common agents of disruption.

The force said that full service was finally restored on Monday afternoon, and added: “We certainly hope our feathered friend escaped without injury and was unaware of the feathers he ruffled.”

Catch up with all of today’s stories on Naked Security



from Naked Security http://ift.tt/2vH3dbF

Sedating a Young Child for Dental Work Should Be a Last Resort 

The New York Times reports that some dentists worry that sedation and anesthesia are “overused as profit-making tools,” and in kids, the practice can be dangerous, or even deadly. A University of Washington study found 44 cases over three decades in which patients died after sedation or general anesthesia for dental work. Most of those patients were two to five years old.

Children under six don’t have as much of an oxygen reserve in their blood as older children or adults, so their bodies can’t compensate for short lapses in oxygen, one pediatric dentist explained. Experts say sedation should not be a first-line treatment—instead, parents should consider and discuss alternatives such as silver diamine fluoride, a liquid that can be brushed on less advanced cavities to stop the infection, or placing a temporary filling until the child is old enough to sit still for a regular one.

Should Kids Be Sedated for Dental Work? | The New York Times


from Lifehacker http://ift.tt/2wfwuxv

Are you an adrenaline junkie who takes risks with security?


Lee Hadlington is a cyberpsychologist at De Montfort University. He researches how psychology plays a role in cybersecurity. He recently conducted a study to find out if personality characteristics such as impulsivity and “internet addiction” determine whether people are conscientious or risky in their cybersecurity behavior. The study’s paper was published in July.

For the study, 538 employed people in the UK completed an online questionnaire. The subjects ranged in age from 18 to 84, with 218 males and 297 females.

Some of the risky cybersecurity behaviors asked about in Hadlington’s study include:

  • Sharing passwords with friends and colleagues
  • Disabling antivirus software to access blocked content
  • Using the same password for multiple websites
  • Sending personal information to strangers on the internet
  • Downloading digital media such as music and video files from unlicensed sources
  • Entering payment information on websites that have no clear security information

Hadlington used Mark Griffiths’ criteria for internet use disorder in his definition of internet addiction. Griffiths defines internet use disorder as a compulsive need to engage in online activities to the detriment of other areas of a person’s life.

However, concepts such as “internet addiction” and “videogame addiction” are controversial in the psychological community. Dr Anthony Bean, a clinical psychologist, was recently interviewed in Polygon about his skepticism of video game addiction.

One of the major concerns that we have is that we’re putting the cart before the horse on this one. We don’t know what videogame addiction is. The psychology and medical fields took the concept of addiction — whether it’s substance abuse or anything like that — and just switched it out with video games. The thinking was, ‘Oh, it’s a form of addiction. It’s like any other addiction.’ But it’s not the same.

You could do the whole process over again with football. Why are we not considering that an addiction? What about someone who really likes to go into a library and read books, and they just can’t put that book down because they’re at that great part that they want. You force them to put that book down, [and] their mind’s just going to be on it. Why isn’t that a form of addiction?

So is characterizing these as addictions an unnecessary pathologization? Says Hadlington:

I think there are two issues here – addiction is a clinical term, which requires a formal diagnosis, and in the context of my work I accept that this is an issue. I think we look more at the issue of problematic use – and internet addiction is an umbrella term through which other aspects of digital technology addiction are actioned – if that makes sense. I haven’t seen anything as of yet [for internet addiction in psychiatry’s DSM-V diagnostic manual], but the very term is problematic – from a research perspective it’s used as a label at the moment.

Nonetheless, according to the definition of internet addiction that Hadlington and Griffiths accept, the study found a correlation with risky cybersecurity behavior. Richard Davis’ Online Cognition Scale was used to determine if subjects in Hadlington’s study were addicted to internet use. From the paper:

The results demonstrated that internet addiction was a significant predictor for risky cybersecurity behaviors.

The study used Christopher Coutlee’s Abbreviated Impulsiveness Scale to determine if subjects were impulsive. Hadlington’s study found another correlation:

The measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor.
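To illustrate what a “significant positive predictor” looks like in this kind of analysis, here is a plain Pearson correlation over invented scores – these numbers are not the study’s data, and the study itself used regression rather than simple correlation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: higher impulsivity scores paired with higher
# self-reported risky-behaviour scores yield a strong positive r.
impulsivity = [2, 4, 5, 7, 8]
risky_behaviour = [1, 3, 4, 6, 9]
```

An r close to +1 corresponds to the positive-predictor pattern the paper reports for attentional and motor impulsivity; a negative r corresponds to the negative relationship reported for non-planning.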

So how can businesses help their employees do better with cybersecurity? Hadlington responds:

I think first of all they need to understand what is going on within their organization. Rather than spending money on making password protection really good, they might already have this covered – then it is a matter of finding out what works. We know from research that online training and emails about cybersecurity really don’t work. You need to connect with employees, so focus groups and guest speakers appear to be most effective at changing behavior.

How could a focus group be implemented?

It takes very little time and money to get involved in academic research that could help a company identify the key risks, which could in turn save them millions in the long run. Focus groups are really easy to do, and you can introduce the topic (such as online security) and ask people about their concerns. Often you see that groups have the same concerns, which the focus group lead can then offer advice on.

So it seems that people who engage in risky behavior in other areas of their life are more likely to also engage in risky behavior in their computer and internet use. Thankfully, people can learn to engage in better cybersecurity behavior, and teaching them in person and asking for feedback is more effective than indirect training methods such as sending them emails.



from Naked Security http://ift.tt/2gnkzby